Dec 13 00:20:28.220541 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 20:55:10 -00 2025 Dec 13 00:20:28.220575 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:20:28.220584 kernel: BIOS-provided physical RAM map: Dec 13 00:20:28.220592 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Dec 13 00:20:28.220598 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Dec 13 00:20:28.220607 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Dec 13 00:20:28.220615 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Dec 13 00:20:28.220622 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Dec 13 00:20:28.220631 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Dec 13 00:20:28.220638 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Dec 13 00:20:28.220645 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Dec 13 00:20:28.220652 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Dec 13 00:20:28.220659 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Dec 13 00:20:28.220666 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Dec 13 00:20:28.220676 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Dec 13 00:20:28.220684 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Dec 13 00:20:28.220691 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 13 00:20:28.220698 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 00:20:28.220708 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 00:20:28.220715 kernel: NX (Execute Disable) protection: active Dec 13 00:20:28.220722 kernel: APIC: Static calls initialized Dec 13 00:20:28.220730 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable Dec 13 00:20:28.220737 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable Dec 13 00:20:28.220744 kernel: extended physical RAM map: Dec 13 00:20:28.220752 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Dec 13 00:20:28.220759 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Dec 13 00:20:28.220767 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Dec 13 00:20:28.220774 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Dec 13 00:20:28.220782 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable Dec 13 00:20:28.220791 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable Dec 13 00:20:28.220799 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable Dec 13 00:20:28.220806 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable Dec 13 00:20:28.220814 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable Dec 13 00:20:28.220822 kernel: reserve setup_data: [mem 
0x000000009b8ed000-0x000000009bb6cfff] reserved Dec 13 00:20:28.220829 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Dec 13 00:20:28.220837 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Dec 13 00:20:28.220844 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Dec 13 00:20:28.220851 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Dec 13 00:20:28.220927 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Dec 13 00:20:28.220943 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Dec 13 00:20:28.220958 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Dec 13 00:20:28.220969 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Dec 13 00:20:28.220980 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 00:20:28.220990 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 00:20:28.221003 kernel: efi: EFI v2.7 by EDK II Dec 13 00:20:28.221014 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018 Dec 13 00:20:28.221024 kernel: random: crng init done Dec 13 00:20:28.221034 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Dec 13 00:20:28.221045 kernel: secureboot: Secure boot enabled Dec 13 00:20:28.221055 kernel: SMBIOS 2.8 present. Dec 13 00:20:28.221065 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Dec 13 00:20:28.221075 kernel: DMI: Memory slots populated: 1/1 Dec 13 00:20:28.221085 kernel: Hypervisor detected: KVM Dec 13 00:20:28.221097 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Dec 13 00:20:28.221108 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 00:20:28.221118 kernel: kvm-clock: using sched offset of 5219611850 cycles Dec 13 00:20:28.221128 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 00:20:28.221140 kernel: tsc: Detected 2794.748 MHz processor Dec 13 00:20:28.221151 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 00:20:28.221163 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 00:20:28.221175 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Dec 13 00:20:28.221190 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Dec 13 00:20:28.221213 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 00:20:28.221227 kernel: Using GB pages for direct mapping Dec 13 00:20:28.221242 kernel: ACPI: Early table checksum verification disabled Dec 13 00:20:28.221256 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Dec 13 00:20:28.221271 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 13 00:20:28.221286 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:20:28.221300 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:20:28.221318 kernel: ACPI: FACS 0x000000009BBDD000 000040 Dec 13 00:20:28.221331 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:20:28.221343 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:20:28.221355 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 
00000001 BXPC 00000001) Dec 13 00:20:28.221366 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 00:20:28.221378 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 00:20:28.221389 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Dec 13 00:20:28.221403 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236] Dec 13 00:20:28.221415 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Dec 13 00:20:28.221427 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Dec 13 00:20:28.221438 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Dec 13 00:20:28.221449 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Dec 13 00:20:28.221461 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Dec 13 00:20:28.221472 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Dec 13 00:20:28.221486 kernel: No NUMA configuration found Dec 13 00:20:28.221498 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Dec 13 00:20:28.221509 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff] Dec 13 00:20:28.221521 kernel: Zone ranges: Dec 13 00:20:28.221532 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 00:20:28.221544 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Dec 13 00:20:28.221555 kernel: Normal empty Dec 13 00:20:28.221567 kernel: Device empty Dec 13 00:20:28.221580 kernel: Movable zone start for each node Dec 13 00:20:28.221591 kernel: Early memory node ranges Dec 13 00:20:28.221602 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Dec 13 00:20:28.221613 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Dec 13 00:20:28.221623 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Dec 13 00:20:28.221634 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Dec 13 00:20:28.221645 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Dec 13 00:20:28.221655 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Dec 13 00:20:28.221669 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 00:20:28.221679 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Dec 13 00:20:28.221690 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 00:20:28.221701 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 00:20:28.221712 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Dec 13 00:20:28.221722 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Dec 13 00:20:28.221733 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 00:20:28.221746 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 00:20:28.221756 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 00:20:28.221766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 00:20:28.221777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 00:20:28.221788 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 00:20:28.221798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 00:20:28.221809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 00:20:28.221823 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 00:20:28.221833 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 00:20:28.221844 kernel: TSC 
deadline timer available Dec 13 00:20:28.221868 kernel: CPU topo: Max. logical packages: 1 Dec 13 00:20:28.221880 kernel: CPU topo: Max. logical dies: 1 Dec 13 00:20:28.221910 kernel: CPU topo: Max. dies per package: 1 Dec 13 00:20:28.221923 kernel: CPU topo: Max. threads per core: 1 Dec 13 00:20:28.221935 kernel: CPU topo: Num. cores per package: 4 Dec 13 00:20:28.221946 kernel: CPU topo: Num. threads per package: 4 Dec 13 00:20:28.221960 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs Dec 13 00:20:28.221974 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 13 00:20:28.221985 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 00:20:28.221997 kernel: kvm-guest: setup PV sched yield Dec 13 00:20:28.222008 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Dec 13 00:20:28.222022 kernel: Booting paravirtualized kernel on KVM Dec 13 00:20:28.222034 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 00:20:28.222045 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Dec 13 00:20:28.222056 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288 Dec 13 00:20:28.222067 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152 Dec 13 00:20:28.222078 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 00:20:28.222089 kernel: kvm-guest: PV spinlocks enabled Dec 13 00:20:28.222103 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 00:20:28.222116 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:20:28.222128 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 00:20:28.222139 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 00:20:28.222150 kernel: Fallback order for Node 0: 0 Dec 13 00:20:28.222162 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054 Dec 13 00:20:28.222177 kernel: Policy zone: DMA32 Dec 13 00:20:28.222188 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 00:20:28.222199 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 00:20:28.222210 kernel: ftrace: allocating 40103 entries in 157 pages Dec 13 00:20:28.222222 kernel: ftrace: allocated 157 pages with 5 groups Dec 13 00:20:28.222233 kernel: Dynamic Preempt: voluntary Dec 13 00:20:28.222245 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 00:20:28.222258 kernel: rcu: RCU event tracing is enabled. Dec 13 00:20:28.222273 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 00:20:28.222285 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 00:20:28.222296 kernel: Rude variant of Tasks RCU enabled. Dec 13 00:20:28.222308 kernel: Tracing variant of Tasks RCU enabled. Dec 13 00:20:28.222319 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 00:20:28.222329 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 00:20:28.222341 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Dec 13 00:20:28.222356 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 00:20:28.222370 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 00:20:28.222382 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 00:20:28.222395 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 00:20:28.222408 kernel: Console: colour dummy device 80x25 Dec 13 00:20:28.222421 kernel: printk: legacy console [ttyS0] enabled Dec 13 00:20:28.222432 kernel: ACPI: Core revision 20240827 Dec 13 00:20:28.222444 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 00:20:28.222459 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 00:20:28.222470 kernel: x2apic enabled Dec 13 00:20:28.222482 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 00:20:28.222493 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Dec 13 00:20:28.222504 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Dec 13 00:20:28.222516 kernel: kvm-guest: setup PV IPIs Dec 13 00:20:28.222527 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 00:20:28.222542 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 13 00:20:28.222554 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Dec 13 00:20:28.222566 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 00:20:28.222577 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 00:20:28.222588 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 00:20:28.222600 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 00:20:28.222616 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 00:20:28.222632 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 13 00:20:28.222644 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 00:20:28.222655 kernel: active return thunk: retbleed_return_thunk Dec 13 00:20:28.222667 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 00:20:28.222678 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 00:20:28.222689 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 00:20:28.222701 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 13 00:20:28.222718 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 13 00:20:28.222729 kernel: active return thunk: srso_return_thunk Dec 13 00:20:28.222741 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 13 00:20:28.222752 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 00:20:28.222764 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 00:20:28.222775 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 00:20:28.222791 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 00:20:28.222802 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Dec 13 00:20:28.222814 kernel: Freeing SMP alternatives memory: 32K Dec 13 00:20:28.222825 kernel: pid_max: default: 32768 minimum: 301 Dec 13 00:20:28.222837 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 13 00:20:28.222848 kernel: landlock: Up and running. Dec 13 00:20:28.222873 kernel: SELinux: Initializing. Dec 13 00:20:28.222889 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 00:20:28.222909 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 00:20:28.222921 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 00:20:28.222933 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 00:20:28.222944 kernel: ... version: 0 Dec 13 00:20:28.222956 kernel: ... bit width: 48 Dec 13 00:20:28.222967 kernel: ... generic registers: 6 Dec 13 00:20:28.222979 kernel: ... value mask: 0000ffffffffffff Dec 13 00:20:28.222994 kernel: ... max period: 00007fffffffffff Dec 13 00:20:28.223005 kernel: ... fixed-purpose events: 0 Dec 13 00:20:28.223016 kernel: ... event mask: 000000000000003f Dec 13 00:20:28.223028 kernel: signal: max sigframe size: 1776 Dec 13 00:20:28.223039 kernel: rcu: Hierarchical SRCU implementation. Dec 13 00:20:28.223051 kernel: rcu: Max phase no-delay instances is 400. Dec 13 00:20:28.223063 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 13 00:20:28.223078 kernel: smp: Bringing up secondary CPUs ... Dec 13 00:20:28.223089 kernel: smpboot: x86: Booting SMP configuration: Dec 13 00:20:28.223101 kernel: .... node #0, CPUs: #1 #2 #3 Dec 13 00:20:28.223112 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 00:20:28.223123 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 00:20:28.223136 kernel: Memory: 2425600K/2552216K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15596K init, 2444K bss, 120680K reserved, 0K cma-reserved) Dec 13 00:20:28.223147 kernel: devtmpfs: initialized Dec 13 00:20:28.223162 kernel: x86/mm: Memory block size: 128MB Dec 13 00:20:28.223174 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Dec 13 00:20:28.223185 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Dec 13 00:20:28.223197 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 00:20:28.223208 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 00:20:28.223220 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 00:20:28.223231 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 00:20:28.223246 kernel: audit: initializing netlink subsys (disabled) Dec 13 00:20:28.223258 kernel: audit: type=2000 audit(1765585226.229:1): state=initialized audit_enabled=0 res=1 Dec 13 00:20:28.223269 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 00:20:28.223282 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 00:20:28.223295 kernel: cpuidle: using governor menu Dec 13 00:20:28.223306 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 00:20:28.223318 kernel: dca service started, version 1.12.1 Dec 13 00:20:28.223332 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Dec 13 00:20:28.223344 kernel: PCI: Using configuration type 1 for base access Dec 13 00:20:28.223355 kernel: kprobes: kprobe jump-optimization 
is enabled. All kprobes are optimized if possible. Dec 13 00:20:28.223367 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 00:20:28.223378 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 00:20:28.223389 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 00:20:28.223401 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 00:20:28.223415 kernel: ACPI: Added _OSI(Module Device) Dec 13 00:20:28.223427 kernel: ACPI: Added _OSI(Processor Device) Dec 13 00:20:28.223438 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 00:20:28.223450 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 00:20:28.223461 kernel: ACPI: Interpreter enabled Dec 13 00:20:28.223472 kernel: ACPI: PM: (supports S0 S5) Dec 13 00:20:28.223483 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 00:20:28.223498 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 00:20:28.223509 kernel: PCI: Using E820 reservations for host bridge windows Dec 13 00:20:28.223521 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 00:20:28.223532 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 00:20:28.223827 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 00:20:28.224069 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 00:20:28.224285 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 00:20:28.224305 kernel: PCI host bridge to bus 0000:00 Dec 13 00:20:28.224539 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 00:20:28.224731 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 00:20:28.224953 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 00:20:28.225144 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Dec 13 00:20:28.225339 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Dec 13 00:20:28.225526 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Dec 13 00:20:28.225717 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 00:20:28.225978 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 13 00:20:28.226203 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Dec 13 00:20:28.226448 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref] Dec 13 00:20:28.226660 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff] Dec 13 00:20:28.226907 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref] Dec 13 00:20:28.227121 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 00:20:28.227366 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 13 00:20:28.227605 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f] Dec 13 00:20:28.227818 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff] Dec 13 00:20:28.228052 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref] Dec 13 00:20:28.228271 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 13 00:20:28.228487 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f] Dec 13 00:20:28.228700 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff] Dec 13 00:20:28.228930 
kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref] Dec 13 00:20:28.229152 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 13 00:20:28.229379 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff] Dec 13 00:20:28.229593 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff] Dec 13 00:20:28.229800 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref] Dec 13 00:20:28.230027 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref] Dec 13 00:20:28.230231 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 13 00:20:28.230481 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 00:20:28.230767 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 13 00:20:28.231007 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df] Dec 13 00:20:28.231223 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff] Dec 13 00:20:28.231456 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 13 00:20:28.231675 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf] Dec 13 00:20:28.231693 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 00:20:28.231706 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 00:20:28.231719 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 00:20:28.231731 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 00:20:28.231743 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 00:20:28.231759 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 00:20:28.231770 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 00:20:28.231782 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 00:20:28.231793 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 00:20:28.231805 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 00:20:28.231817 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 00:20:28.231829 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 00:20:28.231843 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 00:20:28.231855 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 00:20:28.231884 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 00:20:28.231906 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 00:20:28.231918 kernel: iommu: Default domain type: Translated Dec 13 00:20:28.231930 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 00:20:28.231942 kernel: efivars: Registered efivars operations Dec 13 00:20:28.231954 kernel: PCI: Using ACPI for IRQ routing Dec 13 00:20:28.231969 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 00:20:28.231981 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Dec 13 00:20:28.231993 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff] Dec 13 00:20:28.232004 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff] Dec 13 00:20:28.232016 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Dec 13 00:20:28.232028 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Dec 13 00:20:28.232254 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 00:20:28.232484 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 00:20:28.232683 kernel: pci 0000:00:01.0: vgaarb: VGA 
device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 00:20:28.232698 kernel: vgaarb: loaded Dec 13 00:20:28.232711 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 00:20:28.232723 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 00:20:28.232736 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 00:20:28.232748 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 00:20:28.232764 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 00:20:28.232776 kernel: pnp: PnP ACPI init Dec 13 00:20:28.233028 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Dec 13 00:20:28.233047 kernel: pnp: PnP ACPI: found 6 devices Dec 13 00:20:28.233060 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 00:20:28.233072 kernel: NET: Registered PF_INET protocol family Dec 13 00:20:28.233089 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 00:20:28.233101 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 00:20:28.233114 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 00:20:28.233127 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 00:20:28.233139 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 00:20:28.233151 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 00:20:28.233162 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 00:20:28.233177 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 00:20:28.233185 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 00:20:28.233194 kernel: NET: Registered PF_XDP protocol family Dec 13 00:20:28.233380 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window Dec 13 00:20:28.233574 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned Dec 13 00:20:28.233729 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 00:20:28.233944 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 00:20:28.234140 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 00:20:28.234330 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Dec 13 00:20:28.234521 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Dec 13 00:20:28.234709 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Dec 13 00:20:28.234726 kernel: PCI: CLS 0 bytes, default 64 Dec 13 00:20:28.234738 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Dec 13 00:20:28.234755 kernel: Initialise system trusted keyrings Dec 13 00:20:28.234767 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 00:20:28.234779 kernel: Key type asymmetric registered Dec 13 00:20:28.234796 kernel: Asymmetric key parser 'x509' registered Dec 13 00:20:28.234891 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 00:20:28.234946 kernel: io scheduler mq-deadline registered Dec 13 00:20:28.234988 kernel: io scheduler kyber registered Dec 13 00:20:28.235026 kernel: io scheduler bfq registered Dec 13 00:20:28.235059 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 00:20:28.235090 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 00:20:28.235124 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 00:20:28.235154 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 00:20:28.235184 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 00:20:28.235217 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 00:20:28.235258 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 00:20:28.235292 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 00:20:28.235325 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 00:20:28.235655 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 00:20:28.235674 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 00:20:28.235944 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 00:20:28.236142 kernel: rtc_cmos 00:04: setting system clock to 2025-12-13T00:20:26 UTC (1765585226) Dec 13 00:20:28.236339 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 13 00:20:28.236355 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 00:20:28.236368 kernel: efifb: probing for efifb Dec 13 00:20:28.236380 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 13 00:20:28.236392 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 13 00:20:28.236407 kernel: efifb: scrolling: redraw Dec 13 00:20:28.236419 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 00:20:28.236438 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 00:20:28.236454 kernel: fb0: EFI VGA frame buffer device Dec 13 00:20:28.236467 kernel: pstore: Using crash dump compression: deflate Dec 13 00:20:28.236479 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 00:20:28.236493 kernel: NET: Registered PF_INET6 protocol family Dec 13 00:20:28.236505 kernel: Segment Routing with IPv6 Dec 13 00:20:28.236517 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 00:20:28.236529 kernel: NET: Registered PF_PACKET protocol family Dec 13 00:20:28.236541 kernel: Key type dns_resolver registered Dec 13 00:20:28.236553 kernel: IPI shorthand broadcast: enabled Dec 13 00:20:28.236566 kernel: sched_clock: Marking stable (2045002393, 258145768)->(2368458946, -65310785) Dec 13 00:20:28.236581 kernel: registered taskstats version 1 Dec 13 00:20:28.236592 kernel: Loading compiled-in X.509 certificates Dec 13 00:20:28.236605 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 199a9f6885410acbf0a1b178e5562253352ca03c' Dec 13 00:20:28.236617 kernel: Demotion targets for Node 0: null Dec 13 00:20:28.236628 kernel: Key type .fscrypt registered Dec 13 00:20:28.236640 kernel: Key type fscrypt-provisioning registered Dec 13 00:20:28.236652 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 00:20:28.236668 kernel: ima: Allocated hash algorithm: sha1 Dec 13 00:20:28.236680 kernel: ima: No architecture policies found Dec 13 00:20:28.236692 kernel: clk: Disabling unused clocks Dec 13 00:20:28.236704 kernel: Freeing unused kernel image (initmem) memory: 15596K Dec 13 00:20:28.236716 kernel: Write protecting the kernel read-only data: 47104k Dec 13 00:20:28.236729 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 13 00:20:28.236741 kernel: Run /init as init process Dec 13 00:20:28.236755 kernel: with arguments: Dec 13 00:20:28.236767 kernel: /init Dec 13 00:20:28.236779 kernel: with environment: Dec 13 00:20:28.236791 kernel: HOME=/ Dec 13 00:20:28.236803 kernel: TERM=linux Dec 13 00:20:28.236815 kernel: SCSI subsystem initialized Dec 13 00:20:28.236827 kernel: libata version 3.00 loaded. Dec 13 00:20:28.237066 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 00:20:28.237088 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 00:20:28.237285 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 13 00:20:28.237484 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 13 00:20:28.237685 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 00:20:28.237944 kernel: scsi host0: ahci Dec 13 00:20:28.238164 kernel: scsi host1: ahci Dec 13 00:20:28.238414 kernel: scsi host2: ahci Dec 13 00:20:28.238637 kernel: scsi host3: ahci Dec 13 00:20:28.238879 kernel: scsi host4: ahci Dec 13 00:20:28.239118 kernel: scsi host5: ahci Dec 13 00:20:28.239134 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Dec 13 00:20:28.239151 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Dec 13 00:20:28.239163 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Dec 13 00:20:28.239177 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Dec 13 00:20:28.239189 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Dec 13 00:20:28.239202 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Dec 13 00:20:28.239216 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 00:20:28.239229 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 00:20:28.239244 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 00:20:28.239257 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 00:20:28.239270 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 00:20:28.239283 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 00:20:28.239296 kernel: ata3.00: LPM support broken, forcing max_power Dec 13 00:20:28.239309 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 00:20:28.239322 kernel: ata3.00: applying bridge limits Dec 13 00:20:28.239338 kernel: ata3.00: LPM support broken, forcing max_power Dec 13 00:20:28.239350 kernel: ata3.00: configured for UDMA/100 Dec 13 00:20:28.239607 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 00:20:28.239826 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 13 00:20:28.240062 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 13 00:20:28.240082 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 00:20:28.240100 kernel: GPT:16515071 != 27000831 Dec 13 00:20:28.240113 kernel: GPT:Alternate GPT header not at the end of the disk. 
Dec 13 00:20:28.240126 kernel: GPT:16515071 != 27000831
Dec 13 00:20:28.240137 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 00:20:28.240150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 00:20:28.240385 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 00:20:28.240402 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 00:20:28.240655 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 00:20:28.240674 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 00:20:28.240687 kernel: device-mapper: uevent: version 1.0.3
Dec 13 00:20:28.240699 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 13 00:20:28.240712 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Dec 13 00:20:28.240725 kernel: raid6: avx2x4 gen() 30000 MB/s
Dec 13 00:20:28.240737 kernel: raid6: avx2x2 gen() 30606 MB/s
Dec 13 00:20:28.240753 kernel: raid6: avx2x1 gen() 25334 MB/s
Dec 13 00:20:28.240765 kernel: raid6: using algorithm avx2x2 gen() 30606 MB/s
Dec 13 00:20:28.240777 kernel: raid6: .... xor() 19684 MB/s, rmw enabled
Dec 13 00:20:28.240789 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 00:20:28.240802 kernel: xor: automatically using best checksumming function avx
Dec 13 00:20:28.240815 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 00:20:28.240828 kernel: BTRFS: device fsid 0d9bdcaa-df05-4fc6-a68f-ebab7c5b281d devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (181)
Dec 13 00:20:28.240843 kernel: BTRFS info (device dm-0): first mount of filesystem 0d9bdcaa-df05-4fc6-a68f-ebab7c5b281d
Dec 13 00:20:28.240856 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 00:20:28.240887 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 00:20:28.240909 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 13 00:20:28.240921 kernel: loop: module loaded
Dec 13 00:20:28.240934 kernel: loop0: detected capacity change from 0 to 100528
Dec 13 00:20:28.240946 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 00:20:28.240964 systemd[1]: Successfully made /usr/ read-only.
Dec 13 00:20:28.240980 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 13 00:20:28.240994 systemd[1]: Detected virtualization kvm.
Dec 13 00:20:28.241007 systemd[1]: Detected architecture x86-64.
Dec 13 00:20:28.241020 systemd[1]: Running in initrd.
Dec 13 00:20:28.241032 systemd[1]: No hostname configured, using default hostname.
Dec 13 00:20:28.241049 systemd[1]: Hostname set to .
Dec 13 00:20:28.241062 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Dec 13 00:20:28.241075 systemd[1]: Queued start job for default target initrd.target.
Dec 13 00:20:28.241088 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 00:20:28.241101 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 00:20:28.241114 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 00:20:28.241131 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 00:20:28.241145 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 00:20:28.241159 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 00:20:28.241173 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 00:20:28.241187 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 00:20:28.241203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 00:20:28.241216 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 13 00:20:28.241233 systemd[1]: Reached target paths.target - Path Units.
Dec 13 00:20:28.241249 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 00:20:28.241266 systemd[1]: Reached target swap.target - Swaps.
Dec 13 00:20:28.241283 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 00:20:28.241299 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 00:20:28.241318 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 00:20:28.241335 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Dec 13 00:20:28.241351 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 00:20:28.241366 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 13 00:20:28.241379 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 00:20:28.241392 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 00:20:28.241405 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 00:20:28.241421 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 00:20:28.241434 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 00:20:28.241447 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 00:20:28.241461 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 00:20:28.241474 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 00:20:28.241488 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 13 00:20:28.241501 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 00:20:28.241516 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 00:20:28.241529 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 00:20:28.241543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 00:20:28.241559 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 00:20:28.241573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 00:20:28.241586 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 00:20:28.241599 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 00:20:28.241639 systemd-journald[316]: Collecting audit messages is enabled.
Dec 13 00:20:28.241671 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 00:20:28.241685 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 00:20:28.241698 systemd-journald[316]: Journal started Dec 13 00:20:28.241724 systemd-journald[316]: Runtime Journal (/run/log/journal/841d0107f93c4df8935b7461dbeea5fd) is 5.9M, max 47.8M, 41.8M free. Dec 13 00:20:28.246876 kernel: audit: type=1130 audit(1765585228.241:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.246914 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 00:20:28.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.253831 kernel: Bridge firewalling registered Dec 13 00:20:28.253870 kernel: audit: type=1130 audit(1765585228.249:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.254293 systemd-modules-load[319]: Inserted module 'br_netfilter' Dec 13 00:20:28.255480 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 00:20:28.263013 kernel: audit: type=1130 audit(1765585228.255:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.263114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:20:28.270262 kernel: audit: type=1130 audit(1765585228.262:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.272624 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 00:20:28.276588 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 00:20:28.277808 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 00:20:28.291666 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 00:20:28.302191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 00:20:28.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.307888 kernel: audit: type=1130 audit(1765585228.302:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.310884 systemd-tmpfiles[339]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 13 00:20:28.315841 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:20:28.324846 kernel: audit: type=1130 audit(1765585228.315:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.324908 kernel: audit: type=1334 audit(1765585228.317:8): prog-id=6 op=LOAD Dec 13 00:20:28.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.317000 audit: BPF prog-id=6 op=LOAD Dec 13 00:20:28.319780 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 00:20:28.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.324282 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:20:28.337010 kernel: audit: type=1130 audit(1765585228.324:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.337040 kernel: audit: type=1130 audit(1765585228.332:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.325815 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 00:20:28.351541 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 00:20:28.368926 dracut-cmdline[362]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=eb354b129f31681bdee44febfe9924e0e1b63e0b602aff7e7ef2973e2c8c1e9e Dec 13 00:20:28.403184 systemd-resolved[353]: Positive Trust Anchors: Dec 13 00:20:28.403202 systemd-resolved[353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 00:20:28.403207 systemd-resolved[353]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 00:20:28.403245 systemd-resolved[353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 00:20:28.478766 systemd-resolved[353]: Defaulting to hostname 'linux'. Dec 13 00:20:28.479942 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 00:20:28.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.480970 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:20:28.537894 kernel: Loading iSCSI transport class v2.0-870. Dec 13 00:20:28.551877 kernel: iscsi: registered transport (tcp) Dec 13 00:20:28.575919 kernel: iscsi: registered transport (qla4xxx) Dec 13 00:20:28.576048 kernel: QLogic iSCSI HBA Driver Dec 13 00:20:28.601745 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 00:20:28.622787 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:20:28.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.624814 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 00:20:28.676234 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 00:20:28.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.680905 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 00:20:28.683439 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 00:20:28.718944 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 00:20:28.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.719000 audit: BPF prog-id=7 op=LOAD Dec 13 00:20:28.719000 audit: BPF prog-id=8 op=LOAD Dec 13 00:20:28.721107 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:20:28.754959 systemd-udevd[598]: Using default interface naming scheme 'v257'. Dec 13 00:20:28.768075 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:20:28.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:20:28.771509 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 00:20:28.798753 dracut-pre-trigger[664]: rd.md=0: removing MD RAID activation Dec 13 00:20:28.809953 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 00:20:28.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.813000 audit: BPF prog-id=9 op=LOAD Dec 13 00:20:28.814790 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 00:20:28.832027 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 00:20:28.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.837122 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 00:20:28.874409 systemd-networkd[723]: lo: Link UP Dec 13 00:20:28.874416 systemd-networkd[723]: lo: Gained carrier Dec 13 00:20:28.874980 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 00:20:28.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.877785 systemd[1]: Reached target network.target - Network. Dec 13 00:20:28.933165 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:20:28.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:28.939126 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 00:20:28.978196 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 13 00:20:29.003301 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 00:20:29.030897 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 00:20:29.042667 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 00:20:29.051109 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:20:29.058930 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 13 00:20:29.058959 kernel: AES CTR mode by8 optimization enabled Dec 13 00:20:29.051120 systemd-networkd[723]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 00:20:29.051602 systemd-networkd[723]: eth0: Link UP Dec 13 00:20:29.057517 systemd-networkd[723]: eth0: Gained carrier Dec 13 00:20:29.057535 systemd-networkd[723]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:20:29.067668 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Dec 13 00:20:29.072600 systemd-networkd[723]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 00:20:29.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:29.080106 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 00:20:29.082437 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:20:29.082652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:20:29.082767 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:20:29.098315 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:20:29.111909 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:20:29.112036 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:20:29.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:29.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:29.117444 disk-uuid[838]: Primary Header is updated. Dec 13 00:20:29.117444 disk-uuid[838]: Secondary Entries is updated. Dec 13 00:20:29.117444 disk-uuid[838]: Secondary Header is updated. Dec 13 00:20:29.123449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:20:29.156801 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:20:29.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:29.170359 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 00:20:29.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:29.174752 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 00:20:29.183313 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:20:29.187961 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 00:20:29.193321 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 00:20:29.218511 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 00:20:29.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.170021 disk-uuid[841]: Warning: The kernel is still using the old partition table. Dec 13 00:20:30.170021 disk-uuid[841]: The new table will be used at the next reboot or after you Dec 13 00:20:30.170021 disk-uuid[841]: run partprobe(8) or kpartx(8) Dec 13 00:20:30.170021 disk-uuid[841]: The operation has completed successfully. 
Dec 13 00:20:30.184011 systemd-networkd[723]: eth0: Gained IPv6LL Dec 13 00:20:30.188354 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 00:20:30.188544 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 00:20:30.204125 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 13 00:20:30.204168 kernel: audit: type=1130 audit(1765585230.188:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.204185 kernel: audit: type=1131 audit(1765585230.188:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.190620 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 00:20:30.238108 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (868) Dec 13 00:20:30.238178 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:20:30.238191 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:20:30.243752 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:20:30.243838 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:20:30.253900 kernel: BTRFS info (device vda6): last unmount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:20:30.254912 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 00:20:30.264238 kernel: audit: type=1130 audit(1765585230.256:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.258382 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 00:20:30.462412 ignition[887]: Ignition 2.24.0 Dec 13 00:20:30.462426 ignition[887]: Stage: fetch-offline Dec 13 00:20:30.463756 ignition[887]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:20:30.463804 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:20:30.466279 ignition[887]: parsed url from cmdline: "" Dec 13 00:20:30.466284 ignition[887]: no config URL provided Dec 13 00:20:30.466292 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 00:20:30.466306 ignition[887]: no config at "/usr/lib/ignition/user.ign" Dec 13 00:20:30.466371 ignition[887]: op(1): [started] loading QEMU firmware config module Dec 13 00:20:30.466377 ignition[887]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 00:20:30.480655 ignition[887]: op(1): [finished] loading QEMU firmware config module Dec 13 00:20:30.563010 ignition[887]: parsing config with SHA512: 6e16d4dbe7d347ac9c0655f4435dd3868a7c17197071b7987aea4ea92684c7531c91a31f4296163eddd588b20b2586e749761ab03a3ada6f7d450b3cf3545ccc Dec 13 00:20:30.568397 unknown[887]: fetched base config from "system" Dec 13 00:20:30.568412 unknown[887]: fetched user config from "qemu" Dec 13 00:20:30.569197 ignition[887]: fetch-offline: fetch-offline passed Dec 13 00:20:30.569291 ignition[887]: Ignition finished successfully Dec 13 00:20:30.576824 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 00:20:30.584532 kernel: audit: type=1130 audit(1765585230.576:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.577799 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 00:20:30.580020 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 00:20:30.675394 ignition[898]: Ignition 2.24.0 Dec 13 00:20:30.675410 ignition[898]: Stage: kargs Dec 13 00:20:30.675575 ignition[898]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:20:30.675586 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:20:30.676321 ignition[898]: kargs: kargs passed Dec 13 00:20:30.676382 ignition[898]: Ignition finished successfully Dec 13 00:20:30.689546 kernel: audit: type=1130 audit(1765585230.682:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.682441 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 00:20:30.688193 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 00:20:30.762765 ignition[904]: Ignition 2.24.0 Dec 13 00:20:30.762778 ignition[904]: Stage: disks Dec 13 00:20:30.762999 ignition[904]: no configs at "/usr/lib/ignition/base.d" Dec 13 00:20:30.763010 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:20:30.764054 ignition[904]: disks: disks passed Dec 13 00:20:30.764115 ignition[904]: Ignition finished successfully Dec 13 00:20:30.773237 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 00:20:30.780374 kernel: audit: type=1130 audit(1765585230.773:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.774477 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 00:20:30.781303 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 00:20:30.784510 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 00:20:30.788511 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 00:20:30.791653 systemd[1]: Reached target basic.target - Basic System. Dec 13 00:20:30.797447 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 00:20:30.849424 systemd-fsck[913]: ROOT: clean, 15/456736 files, 38230/456704 blocks Dec 13 00:20:30.857889 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 00:20:30.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:30.861127 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 00:20:30.868955 kernel: audit: type=1130 audit(1765585230.858:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:31.020891 kernel: EXT4-fs (vda9): mounted filesystem fc518408-2cc6-461e-9cc3-fcafcb4d05ba r/w with ordered data mode. Quota mode: none. Dec 13 00:20:31.021119 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 00:20:31.022413 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 00:20:31.027905 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 00:20:31.029559 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 00:20:31.032006 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 00:20:31.032054 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 00:20:31.032085 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 00:20:31.050005 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 00:20:31.052434 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 00:20:31.068413 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (921) Dec 13 00:20:31.068481 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:20:31.068507 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:20:31.079925 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:20:31.079970 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:20:31.077235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 00:20:31.244563 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 00:20:31.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:31.247115 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 00:20:31.256275 kernel: audit: type=1130 audit(1765585231.245:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:31.252756 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 00:20:31.282048 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 00:20:31.285221 kernel: BTRFS info (device vda6): last unmount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:20:31.306091 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 00:20:31.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:31.313903 kernel: audit: type=1130 audit(1765585231.308:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:31.365432 ignition[1018]: INFO : Ignition 2.24.0 Dec 13 00:20:31.365432 ignition[1018]: INFO : Stage: mount Dec 13 00:20:31.368340 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:20:31.368340 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:20:31.368340 ignition[1018]: INFO : mount: mount passed Dec 13 00:20:31.368340 ignition[1018]: INFO : Ignition finished successfully Dec 13 00:20:31.378466 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 00:20:31.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:31.383214 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 00:20:31.389092 kernel: audit: type=1130 audit(1765585231.381:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:31.419928 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 00:20:31.448901 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1030) Dec 13 00:20:31.452114 kernel: BTRFS info (device vda6): first mount of filesystem 374f3f93-27fb-4dd4-ae91-362a24dc4bed Dec 13 00:20:31.452148 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 00:20:31.456298 kernel: BTRFS info (device vda6): turning on async discard Dec 13 00:20:31.456337 kernel: BTRFS info (device vda6): enabling free space tree Dec 13 00:20:31.458225 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 00:20:31.496231 ignition[1047]: INFO : Ignition 2.24.0 Dec 13 00:20:31.496231 ignition[1047]: INFO : Stage: files Dec 13 00:20:31.499667 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:20:31.499667 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:20:31.499667 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping Dec 13 00:20:31.499667 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 00:20:31.499667 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 00:20:31.511883 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 00:20:31.511883 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 00:20:31.511883 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 00:20:31.511883 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 13 00:20:31.511883 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 13 00:20:31.502667 unknown[1047]: wrote ssh authorized keys file for user: core Dec 13 00:20:31.549510 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 00:20:31.669928 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 13 00:20:31.673471 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 00:20:31.673471 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 00:20:31.673471 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 00:20:31.673471 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 00:20:31.673471 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 00:20:31.673471 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 00:20:31.673471 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 00:20:31.673471 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 00:20:31.773676 ignition[1047]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 00:20:31.776899 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 00:20:31.776899 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 13 00:20:31.784839 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 13 00:20:31.789599 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 13 00:20:31.789599 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 13 00:20:32.074563 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 00:20:32.552515 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 13 00:20:32.552515 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 00:20:32.558965 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 00:20:32.562390 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 00:20:32.562390 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 00:20:32.562390 ignition[1047]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 00:20:32.562390 ignition[1047]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 00:20:32.562390 ignition[1047]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 00:20:32.562390 ignition[1047]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 00:20:32.562390 ignition[1047]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 00:20:32.594060 ignition[1047]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 00:20:32.603571 ignition[1047]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 00:20:32.607199 ignition[1047]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 00:20:32.607199 ignition[1047]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 00:20:32.607199 ignition[1047]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 00:20:32.607199 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 00:20:32.607199 ignition[1047]: INFO : files: createResultFile: createFiles: op(12): [finished] 
writing file "/sysroot/etc/.ignition-result.json" Dec 13 00:20:32.607199 ignition[1047]: INFO : files: files passed Dec 13 00:20:32.607199 ignition[1047]: INFO : Ignition finished successfully Dec 13 00:20:32.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.615175 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 00:20:32.619533 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 00:20:32.628619 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 00:20:32.653179 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 00:20:32.653424 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 00:20:32.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.661202 initrd-setup-root-after-ignition[1078]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 00:20:32.666596 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:20:32.666596 initrd-setup-root-after-ignition[1080]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:20:32.673514 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 00:20:32.678472 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 00:20:32.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.679455 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 00:20:32.687318 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 00:20:32.748814 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 00:20:32.748977 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 00:20:32.753275 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 00:20:32.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.754275 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 00:20:32.759952 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 00:20:32.761512 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Dec 13 00:20:32.796209 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 00:20:32.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.799919 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 00:20:32.890030 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 13 00:20:32.890280 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 00:20:32.891232 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:20:32.897395 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 00:20:32.900532 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 00:20:32.900671 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 00:20:32.904712 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 00:20:32.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.908673 systemd[1]: Stopped target basic.target - Basic System. Dec 13 00:20:32.909640 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 00:20:32.910467 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 00:20:32.916908 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 00:20:32.917729 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 13 00:20:32.924779 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 00:20:32.928123 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 00:20:32.929282 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 00:20:32.933964 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 00:20:32.937295 systemd[1]: Stopped target swap.target - Swaps. Dec 13 00:20:32.940350 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 00:20:32.940532 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 00:20:32.943071 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:20:32.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.943656 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:20:32.950321 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 00:20:32.953387 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:20:32.956850 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 00:20:32.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.956992 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 00:20:32.962105 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Dec 13 00:20:32.962261 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 00:20:32.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.963444 systemd[1]: Stopped target paths.target - Path Units. Dec 13 00:20:32.963879 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 00:20:32.974002 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:20:32.974784 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 00:20:32.979804 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 00:20:32.980675 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 00:20:32.980795 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 00:20:32.981495 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 00:20:32.981574 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 00:20:32.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.987502 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 13 00:20:32.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.987585 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:20:32.988361 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 00:20:32.988485 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 00:20:32.993390 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 00:20:32.993552 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 00:20:33.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:32.997854 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 00:20:33.000581 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 00:20:33.003577 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 00:20:33.003789 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:20:33.007688 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 00:20:33.007822 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:20:33.011970 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 00:20:33.012130 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 00:20:33.019660 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 00:20:33.027184 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 00:20:33.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.058367 ignition[1104]: INFO : Ignition 2.24.0 Dec 13 00:20:33.058367 ignition[1104]: INFO : Stage: umount Dec 13 00:20:33.061514 ignition[1104]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 00:20:33.061514 ignition[1104]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 00:20:33.064032 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 00:20:33.067054 ignition[1104]: INFO : umount: umount passed Dec 13 00:20:33.068650 ignition[1104]: INFO : Ignition finished successfully Dec 13 00:20:33.070904 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 00:20:33.071038 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 00:20:33.072373 systemd[1]: Stopped target network.target - Network. Dec 13 00:20:33.072718 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 00:20:33.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.072781 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 00:20:33.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.080005 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 00:20:33.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.080064 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 00:20:33.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.081208 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 00:20:33.081259 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 00:20:33.085335 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 00:20:33.085388 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 00:20:33.085981 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 00:20:33.092062 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 00:20:33.105950 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Dec 13 00:20:33.106116 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 00:20:33.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.115140 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 00:20:33.115000 audit: BPF prog-id=6 op=UNLOAD Dec 13 00:20:33.115324 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 00:20:33.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.122888 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 13 00:20:33.123000 audit: BPF prog-id=9 op=UNLOAD Dec 13 00:20:33.123719 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 00:20:33.123764 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:20:33.132998 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 00:20:33.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.134791 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 00:20:33.134872 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 00:20:33.137324 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 00:20:33.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.137383 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:20:33.140900 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 00:20:33.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.140954 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 00:20:33.143076 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:20:33.144071 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 00:20:33.149151 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 00:20:33.151486 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 00:20:33.151584 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 00:20:33.171138 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 00:20:33.171351 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Dec 13 00:20:33.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.177583 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 00:20:33.177907 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:20:33.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.179101 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 00:20:33.179163 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 00:20:33.184598 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 00:20:33.184638 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:20:33.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.188505 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 00:20:33.188567 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 00:20:33.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.196983 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 00:20:33.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.197070 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 00:20:33.201173 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 00:20:33.201232 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 00:20:33.210019 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 00:20:33.210837 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 13 00:20:33.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.210918 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:20:33.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.211547 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 00:20:33.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.211595 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 00:20:33.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.212236 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 00:20:33.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.212286 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 00:20:33.212829 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 00:20:33.212892 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:20:33.230825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:20:33.230930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:20:33.257823 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 00:20:33.258001 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 00:20:33.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:33.297079 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 00:20:33.298848 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 00:20:33.312984 systemd[1]: Switching root. Dec 13 00:20:33.348963 systemd-journald[316]: Journal stopped Dec 13 00:20:35.115768 systemd-journald[316]: Received SIGTERM from PID 1 (systemd). Dec 13 00:20:35.115850 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 00:20:35.115916 kernel: SELinux: policy capability open_perms=1 Dec 13 00:20:35.116136 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 00:20:35.116164 kernel: SELinux: policy capability always_check_network=0 Dec 13 00:20:35.116181 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 00:20:35.116201 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 00:20:35.116217 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 00:20:35.116233 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 00:20:35.116255 kernel: SELinux: policy capability userspace_initial_context=0 Dec 13 00:20:35.116273 systemd[1]: Successfully loaded SELinux policy in 69.983ms. Dec 13 00:20:35.116304 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.002ms. Dec 13 00:20:35.116326 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 13 00:20:35.116344 systemd[1]: Detected virtualization kvm. Dec 13 00:20:35.116369 systemd[1]: Detected architecture x86-64. Dec 13 00:20:35.116387 systemd[1]: Detected first boot. 
Dec 13 00:20:35.116405 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 13 00:20:35.116423 zram_generator::config[1148]: No configuration found. Dec 13 00:20:35.116443 kernel: Guest personality initialized and is inactive Dec 13 00:20:35.116464 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 13 00:20:35.116481 kernel: Initialized host personality Dec 13 00:20:35.116496 kernel: NET: Registered PF_VSOCK protocol family Dec 13 00:20:35.116512 systemd[1]: Populated /etc with preset unit settings. Dec 13 00:20:35.116530 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 00:20:35.116548 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 00:20:35.116564 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 00:20:35.116589 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 00:20:35.116607 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 00:20:35.116624 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 00:20:35.116648 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 00:20:35.116666 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 00:20:35.116685 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 00:20:35.116703 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 00:20:35.116737 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 00:20:35.116754 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 00:20:35.116771 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 00:20:35.116789 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 00:20:35.116806 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 00:20:35.116823 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 00:20:35.116840 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 00:20:35.116877 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 00:20:35.116901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 00:20:35.116919 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 00:20:35.116937 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 00:20:35.116954 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 00:20:35.116972 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 00:20:35.116993 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 00:20:35.117011 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 00:20:35.117029 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 00:20:35.117046 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 13 00:20:35.117063 systemd[1]: Reached target slices.target - Slice Units. Dec 13 00:20:35.117079 systemd[1]: Reached target swap.target - Swaps. 
Dec 13 00:20:35.117097 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 00:20:35.117119 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 00:20:35.117137 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 13 00:20:35.117155 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 13 00:20:35.117172 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 13 00:20:35.117190 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 00:20:35.117208 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 13 00:20:35.117226 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 13 00:20:35.117248 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 00:20:35.117266 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 00:20:35.117283 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 00:20:35.117304 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 00:20:35.117326 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 00:20:35.117343 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 00:20:35.117362 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:20:35.117391 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 00:20:35.117412 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 00:20:35.117430 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 00:20:35.117448 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 00:20:35.117468 systemd[1]: Reached target machines.target - Containers. Dec 13 00:20:35.117486 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 00:20:35.117503 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:20:35.117519 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 00:20:35.117536 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 00:20:35.117555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:20:35.117573 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 00:20:35.117595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:20:35.117613 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 00:20:35.117630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:20:35.117648 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 00:20:35.117667 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 00:20:35.117684 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 00:20:35.117704 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Dec 13 00:20:35.117741 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 00:20:35.117762 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:20:35.117780 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 00:20:35.117801 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 00:20:35.117819 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 00:20:35.117836 kernel: fuse: init (API version 7.41) Dec 13 00:20:35.117853 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 00:20:35.117995 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 13 00:20:35.118015 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 00:20:35.118035 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:20:35.118057 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 00:20:35.118075 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 00:20:35.118091 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 00:20:35.118108 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 00:20:35.118126 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 00:20:35.118170 systemd-journald[1211]: Collecting audit messages is enabled. Dec 13 00:20:35.118207 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 00:20:35.118227 systemd-journald[1211]: Journal started Dec 13 00:20:35.118261 systemd-journald[1211]: Runtime Journal (/run/log/journal/841d0107f93c4df8935b7461dbeea5fd) is 5.9M, max 47.8M, 41.8M free. Dec 13 00:20:34.843000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 00:20:35.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:20:35.011000 audit: BPF prog-id=14 op=UNLOAD Dec 13 00:20:35.011000 audit: BPF prog-id=13 op=UNLOAD Dec 13 00:20:35.012000 audit: BPF prog-id=15 op=LOAD Dec 13 00:20:35.013000 audit: BPF prog-id=16 op=LOAD Dec 13 00:20:35.013000 audit: BPF prog-id=17 op=LOAD Dec 13 00:20:35.113000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 00:20:35.113000 audit[1211]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdff82ea10 a2=4000 a3=0 items=0 ppid=1 pid=1211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:35.113000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 00:20:34.619014 systemd[1]: Queued start job for default target multi-user.target. Dec 13 00:20:34.644571 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 00:20:34.645269 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 00:20:35.121895 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 00:20:35.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.126992 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 00:20:35.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.130027 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 00:20:35.130311 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 00:20:35.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.133744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:20:35.134061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:20:35.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.137409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:20:35.137762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 13 00:20:35.140923 kernel: ACPI: bus type drm_connector registered Dec 13 00:20:35.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.141524 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 00:20:35.142049 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 00:20:35.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.144691 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 00:20:35.145084 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 00:20:35.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.148430 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:20:35.148837 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:20:35.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.151515 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 00:20:35.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.154487 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 00:20:35.158605 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 00:20:35.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 00:20:35.162322 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 13 00:20:35.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.166165 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 00:20:35.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.183335 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 00:20:35.186049 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 13 00:20:35.189870 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 00:20:35.193053 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 00:20:35.195157 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 00:20:35.195270 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 00:20:35.198222 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 13 00:20:35.201345 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:20:35.201540 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:20:35.205989 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 00:20:35.212041 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 00:20:35.214261 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 00:20:35.215581 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 00:20:35.217661 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 00:20:35.218821 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 00:20:35.223114 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 00:20:35.225775 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 00:20:35.231019 systemd-journald[1211]: Time spent on flushing to /var/log/journal/841d0107f93c4df8935b7461dbeea5fd is 31.422ms for 1161 entries. Dec 13 00:20:35.231019 systemd-journald[1211]: System Journal (/var/log/journal/841d0107f93c4df8935b7461dbeea5fd) is 8M, max 163.5M, 155.5M free. Dec 13 00:20:35.280773 systemd-journald[1211]: Received client request to flush runtime journal. Dec 13 00:20:35.280920 kernel: kauditd_printk_skb: 90 callbacks suppressed Dec 13 00:20:35.280983 kernel: audit: type=1130 audit(1765585235.236:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 00:20:35.281038 kernel: loop1: detected capacity change from 0 to 171112 Dec 13 00:20:35.281083 kernel: loop1: p1 p2 p3 Dec 13 00:20:35.281129 kernel: audit: type=1130 audit(1765585235.262:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.281183 kernel: audit: type=1130 audit(1765585235.277:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.233107 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 00:20:35.238399 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 00:20:35.240814 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 00:20:35.260740 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 00:20:35.262989 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 00:20:35.273014 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 13 00:20:35.276159 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 00:20:35.283432 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 00:20:35.290290 kernel: audit: type=1130 audit(1765585235.285:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.296598 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Dec 13 00:20:35.296625 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Dec 13 00:20:35.306385 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 00:20:35.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.313015 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Dec 13 00:20:35.314890 kernel: audit: type=1130 audit(1765585235.308:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.323911 kernel: erofs: (device loop1p1): mounted with root inode @ nid 39. Dec 13 00:20:35.326175 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 13 00:20:35.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.334913 kernel: audit: type=1130 audit(1765585235.327:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.375904 kernel: loop2: detected capacity change from 0 to 224512 Dec 13 00:20:35.389514 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 00:20:35.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.394921 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 13 00:20:35.398889 kernel: audit: type=1130 audit(1765585235.391:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.398960 kernel: audit: type=1334 audit(1765585235.392:134): prog-id=18 op=LOAD Dec 13 00:20:35.398990 kernel: audit: type=1334 audit(1765585235.392:135): prog-id=19 op=LOAD Dec 13 00:20:35.399018 kernel: audit: type=1334 audit(1765585235.392:136): prog-id=20 op=LOAD Dec 13 00:20:35.392000 audit: BPF prog-id=18 op=LOAD Dec 13 00:20:35.392000 audit: BPF prog-id=19 op=LOAD Dec 13 00:20:35.392000 audit: BPF prog-id=20 op=LOAD Dec 13 00:20:35.403000 audit: BPF prog-id=21 op=LOAD Dec 13 00:20:35.404802 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 00:20:35.410231 kernel: loop3: detected capacity change from 0 to 375256 Dec 13 00:20:35.409132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 00:20:35.414462 kernel: loop3: p1 p2 p3 Dec 13 00:20:35.414000 audit: BPF prog-id=22 op=LOAD Dec 13 00:20:35.414000 audit: BPF prog-id=23 op=LOAD Dec 13 00:20:35.414000 audit: BPF prog-id=24 op=LOAD Dec 13 00:20:35.423986 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 13 00:20:35.426000 audit: BPF prog-id=25 op=LOAD Dec 13 00:20:35.427000 audit: BPF prog-id=26 op=LOAD Dec 13 00:20:35.427000 audit: BPF prog-id=27 op=LOAD Dec 13 00:20:35.429265 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 00:20:35.435895 kernel: erofs: (device loop3p1): mounted with root inode @ nid 39. Dec 13 00:20:35.446735 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Dec 13 00:20:35.447169 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Dec 13 00:20:35.453089 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 00:20:35.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.458884 kernel: loop4: detected capacity change from 0 to 171112 Dec 13 00:20:35.461173 kernel: loop4: p1 p2 p3 Dec 13 00:20:35.480439 systemd-nsresourced[1292]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 13 00:20:35.483056 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:20:35.483164 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 00:20:35.484098 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 13 00:20:35.484819 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Dec 13 00:20:35.488066 kernel: device-mapper: ioctl: error adding target to table Dec 13 00:20:35.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.488934 (sd-merge)[1297]: device-mapper: reload ioctl on af67e6a29067aeda0590a0009488436dd8f718bac6be743160aad6f147c2927f-verity (253:1) failed: Invalid argument Dec 13 00:20:35.497579 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 00:20:35.498893 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:20:35.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.565930 systemd-oomd[1289]: No swap; memory pressure usage will be degraded Dec 13 00:20:35.567344 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 13 00:20:35.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.585753 systemd-resolved[1290]: Positive Trust Anchors: Dec 13 00:20:35.585768 systemd-resolved[1290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 00:20:35.585773 systemd-resolved[1290]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 13 00:20:35.585804 systemd-resolved[1290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 00:20:35.591352 systemd-resolved[1290]: Defaulting to hostname 'linux'. Dec 13 00:20:35.592949 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 00:20:35.595029 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Dec 13 00:20:35.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:35.647170 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 00:20:36.188675 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 00:20:36.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:36.191000 audit: BPF prog-id=8 op=UNLOAD Dec 13 00:20:36.191000 audit: BPF prog-id=7 op=UNLOAD Dec 13 00:20:36.192000 audit: BPF prog-id=28 op=LOAD Dec 13 00:20:36.192000 audit: BPF prog-id=29 op=LOAD Dec 13 00:20:36.194490 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 00:20:36.262015 systemd-udevd[1316]: Using default interface naming scheme 'v257'. Dec 13 00:20:36.288103 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 00:20:36.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:36.291000 audit: BPF prog-id=30 op=LOAD Dec 13 00:20:36.293634 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 00:20:36.428377 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 00:20:36.431038 systemd-networkd[1321]: lo: Link UP Dec 13 00:20:36.431044 systemd-networkd[1321]: lo: Gained carrier Dec 13 00:20:36.433237 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 00:20:36.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:36.435539 systemd[1]: Reached target network.target - Network. Dec 13 00:20:36.439517 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 13 00:20:36.444335 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 00:20:36.475031 systemd-networkd[1321]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:20:36.475046 systemd-networkd[1321]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 00:20:36.477066 systemd-networkd[1321]: eth0: Link UP Dec 13 00:20:36.478345 systemd-networkd[1321]: eth0: Gained carrier Dec 13 00:20:36.478375 systemd-networkd[1321]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 13 00:20:36.487925 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 00:20:36.496618 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 13 00:20:36.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 00:20:36.501954 systemd-networkd[1321]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 00:20:36.516469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 00:20:36.521057 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 00:20:36.526126 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 00:20:36.534923 kernel: ACPI: button: Power Button [PWRF] Dec 13 00:20:36.547858 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 00:20:36.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:36.556648 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 00:20:36.557115 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 00:20:36.559913 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 00:20:36.744362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:20:36.763503 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 00:20:36.763801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:20:36.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:36.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:36.769001 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 00:20:36.793877 kernel: kvm_amd: TSC scaling supported Dec 13 00:20:36.793935 kernel: kvm_amd: Nested Virtualization enabled Dec 13 00:20:36.793949 kernel: kvm_amd: Nested Paging enabled Dec 13 00:20:36.793963 kernel: kvm_amd: LBR virtualization supported Dec 13 00:20:36.793975 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 13 00:20:36.795281 kernel: kvm_amd: Virtual GIF supported Dec 13 00:20:36.801887 kernel: erofs: (device dm-1): mounted with root inode @ nid 39. 
Dec 13 00:20:36.807897 kernel: loop5: detected capacity change from 0 to 224512 Dec 13 00:20:36.824935 kernel: loop6: detected capacity change from 0 to 375256 Dec 13 00:20:36.826887 kernel: EDAC MC: Ver: 3.0.0 Dec 13 00:20:36.826955 kernel: loop6: p1 p2 p3 Dec 13 00:20:36.861464 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:20:36.861567 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Dec 13 00:20:36.861585 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Dec 13 00:20:36.861611 kernel: device-mapper: ioctl: error adding target to table Dec 13 00:20:36.860547 (sd-merge)[1297]: device-mapper: reload ioctl on c81b0b335c4f741d8803812340292f37f57a6bdf618683fbcdb11178b8725544-verity (253:2) failed: Invalid argument Dec 13 00:20:36.863917 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Dec 13 00:20:36.865197 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 00:20:36.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:36.902966 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Dec 13 00:20:36.904190 (sd-merge)[1297]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 13 00:20:36.909171 (sd-merge)[1297]: Merged extensions into '/usr'. Dec 13 00:20:36.913602 systemd[1]: Reload requested from client PID 1268 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 00:20:36.913619 systemd[1]: Reloading... Dec 13 00:20:36.987915 zram_generator::config[1424]: No configuration found. Dec 13 00:20:37.259717 systemd[1]: Reloading finished in 345 ms. Dec 13 00:20:37.296028 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 00:20:37.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:37.327835 systemd[1]: Starting ensure-sysext.service... Dec 13 00:20:37.330616 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Dec 13 00:20:37.333000 audit: BPF prog-id=31 op=LOAD Dec 13 00:20:37.333000 audit: BPF prog-id=15 op=UNLOAD Dec 13 00:20:37.333000 audit: BPF prog-id=32 op=LOAD Dec 13 00:20:37.333000 audit: BPF prog-id=33 op=LOAD Dec 13 00:20:37.333000 audit: BPF prog-id=16 op=UNLOAD Dec 13 00:20:37.333000 audit: BPF prog-id=17 op=UNLOAD Dec 13 00:20:37.334000 audit: BPF prog-id=34 op=LOAD Dec 13 00:20:37.334000 audit: BPF prog-id=30 op=UNLOAD Dec 13 00:20:37.335000 audit: BPF prog-id=35 op=LOAD Dec 13 00:20:37.335000 audit: BPF prog-id=25 op=UNLOAD Dec 13 00:20:37.335000 audit: BPF prog-id=36 op=LOAD Dec 13 00:20:37.335000 audit: BPF prog-id=37 op=LOAD Dec 13 00:20:37.335000 audit: BPF prog-id=26 op=UNLOAD Dec 13 00:20:37.335000 audit: BPF prog-id=27 op=UNLOAD Dec 13 00:20:37.336000 audit: BPF prog-id=38 op=LOAD Dec 13 00:20:37.336000 audit: BPF prog-id=22 op=UNLOAD Dec 13 00:20:37.336000 audit: BPF prog-id=39 op=LOAD Dec 13 00:20:37.336000 audit: BPF prog-id=40 op=LOAD Dec 13 00:20:37.336000 audit: BPF prog-id=23 op=UNLOAD Dec 13 00:20:37.336000 audit: BPF prog-id=24 op=UNLOAD Dec 13 00:20:37.338000 audit: BPF prog-id=41 op=LOAD Dec 13 00:20:37.338000 audit: BPF prog-id=21 op=UNLOAD Dec 13 00:20:37.340000 audit: BPF prog-id=42 op=LOAD Dec 13 00:20:37.340000 audit: BPF prog-id=18 op=UNLOAD Dec 13 00:20:37.340000 audit: BPF prog-id=43 op=LOAD Dec 13 00:20:37.340000 audit: BPF prog-id=44 op=LOAD Dec 13 00:20:37.340000 audit: BPF prog-id=19 op=UNLOAD Dec 13 00:20:37.340000 audit: BPF prog-id=20 op=UNLOAD Dec 13 00:20:37.340000 audit: BPF prog-id=45 op=LOAD Dec 13 00:20:37.340000 audit: BPF prog-id=46 op=LOAD Dec 13 00:20:37.341000 audit: BPF prog-id=28 op=UNLOAD Dec 13 00:20:37.341000 audit: BPF prog-id=29 op=UNLOAD Dec 13 00:20:37.348249 systemd[1]: Reload requested from client PID 1454 ('systemctl') (unit ensure-sysext.service)... Dec 13 00:20:37.348268 systemd[1]: Reloading... Dec 13 00:20:37.350285 systemd-tmpfiles[1455]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 13 00:20:37.350335 systemd-tmpfiles[1455]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 13 00:20:37.350643 systemd-tmpfiles[1455]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 00:20:37.352108 systemd-tmpfiles[1455]: ACLs are not supported, ignoring. Dec 13 00:20:37.352188 systemd-tmpfiles[1455]: ACLs are not supported, ignoring. Dec 13 00:20:37.358681 systemd-tmpfiles[1455]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 00:20:37.358697 systemd-tmpfiles[1455]: Skipping /boot Dec 13 00:20:37.370819 systemd-tmpfiles[1455]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 00:20:37.370834 systemd-tmpfiles[1455]: Skipping /boot Dec 13 00:20:37.416940 zram_generator::config[1492]: No configuration found. Dec 13 00:20:37.670064 systemd[1]: Reloading finished in 321 ms. Dec 13 00:20:37.693350 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 00:20:37.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:20:37.698000 audit: BPF prog-id=47 op=LOAD Dec 13 00:20:37.698000 audit: BPF prog-id=35 op=UNLOAD Dec 13 00:20:37.698000 audit: BPF prog-id=48 op=LOAD Dec 13 00:20:37.698000 audit: BPF prog-id=49 op=LOAD Dec 13 00:20:37.698000 audit: BPF prog-id=36 op=UNLOAD Dec 13 00:20:37.698000 audit: BPF prog-id=37 op=UNLOAD Dec 13 00:20:37.699000 audit: BPF prog-id=50 op=LOAD Dec 13 00:20:37.699000 audit: BPF prog-id=34 op=UNLOAD Dec 13 00:20:37.701000 audit: BPF prog-id=51 op=LOAD Dec 13 00:20:37.701000 audit: BPF prog-id=31 op=UNLOAD Dec 13 00:20:37.701000 audit: BPF prog-id=52 op=LOAD Dec 13 00:20:37.701000 audit: BPF prog-id=53 op=LOAD Dec 13 00:20:37.701000 audit: BPF prog-id=32 op=UNLOAD Dec 13 00:20:37.701000 audit: BPF prog-id=33 op=UNLOAD Dec 13 00:20:37.701000 audit: BPF prog-id=54 op=LOAD Dec 13 00:20:37.701000 audit: BPF prog-id=55 op=LOAD Dec 13 00:20:37.701000 audit: BPF prog-id=45 op=UNLOAD Dec 13 00:20:37.701000 audit: BPF prog-id=46 op=UNLOAD Dec 13 00:20:37.703000 audit: BPF prog-id=56 op=LOAD Dec 13 00:20:37.703000 audit: BPF prog-id=42 op=UNLOAD Dec 13 00:20:37.703000 audit: BPF prog-id=57 op=LOAD Dec 13 00:20:37.703000 audit: BPF prog-id=58 op=LOAD Dec 13 00:20:37.703000 audit: BPF prog-id=43 op=UNLOAD Dec 13 00:20:37.715000 audit: BPF prog-id=44 op=UNLOAD Dec 13 00:20:37.716000 audit: BPF prog-id=59 op=LOAD Dec 13 00:20:37.716000 audit: BPF prog-id=38 op=UNLOAD Dec 13 00:20:37.716000 audit: BPF prog-id=60 op=LOAD Dec 13 00:20:37.716000 audit: BPF prog-id=61 op=LOAD Dec 13 00:20:37.716000 audit: BPF prog-id=39 op=UNLOAD Dec 13 00:20:37.716000 audit: BPF prog-id=40 op=UNLOAD Dec 13 00:20:37.717000 audit: BPF prog-id=62 op=LOAD Dec 13 00:20:37.717000 audit: BPF prog-id=41 op=UNLOAD Dec 13 00:20:37.730838 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:20:37.735006 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 00:20:37.758630 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 00:20:37.763124 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 00:20:37.768809 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 00:20:37.775800 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:20:37.777667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:20:37.780517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:20:37.782000 audit[1532]: SYSTEM_BOOT pid=1532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 00:20:37.785429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:20:37.790464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 00:20:37.792523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:20:37.792787 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 13 00:20:37.792963 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:20:37.793060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:20:37.800953 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:20:37.801466 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:20:37.802937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:20:37.803109 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:20:37.803204 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:20:37.803290 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:20:37.804363 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 00:20:37.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:37.810046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:20:37.812488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:20:37.817708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 00:20:37.817991 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:20:37.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:37.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:37.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:37.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:37.826116 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 00:20:37.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:20:37.841086 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 00:20:37.841347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 00:20:37.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:37.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:37.849000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 00:20:37.849000 audit[1560]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffef3d7aa0 a2=420 a3=0 items=0 ppid=1527 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:37.849000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:20:37.850465 augenrules[1560]: No rules Dec 13 00:20:37.850687 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:20:37.851014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 00:20:37.854164 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 00:20:37.857805 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 00:20:37.863076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 00:20:37.865734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 00:20:37.865913 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 13 00:20:37.865973 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 13 00:20:37.866062 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 00:20:37.868571 systemd[1]: Finished ensure-sysext.service. Dec 13 00:20:37.870964 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:20:37.871729 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 00:20:37.874480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 00:20:37.874800 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 00:20:37.878024 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 00:20:37.878327 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 00:20:37.880720 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 00:20:37.883805 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Dec 13 00:20:37.884116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 00:20:37.894035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 00:20:37.894147 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 00:20:37.897081 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 00:20:37.899190 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 00:20:38.094493 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 00:20:39.965094 systemd-resolved[1290]: Clock change detected. Flushing caches. Dec 13 00:20:39.965447 systemd-timesyncd[1574]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 00:20:39.965449 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 00:20:39.965496 systemd-timesyncd[1574]: Initial clock synchronization to Sat 2025-12-13 00:20:39.965032 UTC. Dec 13 00:20:40.276692 ldconfig[1529]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 00:20:40.286342 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 00:20:40.292777 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 00:20:40.331328 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 00:20:40.333660 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 00:20:40.335654 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 00:20:40.337746 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 00:20:40.339831 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 13 00:20:40.341962 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 00:20:40.343924 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 00:20:40.346227 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 13 00:20:40.348454 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 13 00:20:40.350252 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 00:20:40.352470 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 00:20:40.352511 systemd[1]: Reached target paths.target - Path Units. Dec 13 00:20:40.354049 systemd[1]: Reached target timers.target - Timer Units. Dec 13 00:20:40.357048 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 00:20:40.361112 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 00:20:40.368289 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 13 00:20:40.370741 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 13 00:20:40.372930 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Dec 13 00:20:40.372983 systemd-networkd[1321]: eth0: Gained IPv6LL Dec 13 00:20:40.378766 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 00:20:40.381284 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 13 00:20:40.384612 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 00:20:40.387203 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 00:20:40.390864 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 00:20:40.392705 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 00:20:40.394276 systemd[1]: Reached target basic.target - Basic System. Dec 13 00:20:40.395912 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 00:20:40.395945 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 00:20:40.397457 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 00:20:40.400371 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 00:20:40.403250 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 00:20:40.413890 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 00:20:40.417041 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 00:20:40.420042 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 00:20:40.421784 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 00:20:40.422982 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 13 00:20:40.427024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:20:40.430984 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 00:20:40.431417 jq[1588]: false Dec 13 00:20:40.435957 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 00:20:40.438907 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 00:20:40.443015 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 00:20:40.443127 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Refreshing passwd entry cache Dec 13 00:20:40.444954 oslogin_cache_refresh[1590]: Refreshing passwd entry cache Dec 13 00:20:40.451668 extend-filesystems[1589]: Found /dev/vda6 Dec 13 00:20:40.453285 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Failure getting users, quitting Dec 13 00:20:40.453285 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 13 00:20:40.453285 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Refreshing group entry cache Dec 13 00:20:40.451906 oslogin_cache_refresh[1590]: Failure getting users, quitting Dec 13 00:20:40.451930 oslogin_cache_refresh[1590]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 13 00:20:40.451989 oslogin_cache_refresh[1590]: Refreshing group entry cache Dec 13 00:20:40.454629 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 13 00:20:40.456394 extend-filesystems[1589]: Found /dev/vda9 Dec 13 00:20:40.459455 extend-filesystems[1589]: Checking size of /dev/vda9 Dec 13 00:20:40.462878 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Failure getting groups, quitting Dec 13 00:20:40.462878 google_oslogin_nss_cache[1590]: oslogin_cache_refresh[1590]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 13 00:20:40.461777 oslogin_cache_refresh[1590]: Failure getting groups, quitting Dec 13 00:20:40.461792 oslogin_cache_refresh[1590]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 13 00:20:40.464084 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 00:20:40.466106 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 00:20:40.467265 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 00:20:40.470894 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 00:20:40.475168 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 00:20:40.483024 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 00:20:40.486800 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 00:20:40.487939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 00:20:40.488329 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 13 00:20:40.491103 extend-filesystems[1589]: Resized partition /dev/vda9 Dec 13 00:20:40.494130 extend-filesystems[1622]: resize2fs 1.47.3 (8-Jul-2025) Dec 13 00:20:40.496597 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 13 00:20:40.502607 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 00:20:40.504895 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 00:20:40.510267 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 13 00:20:40.515431 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 00:20:40.516806 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 00:20:40.526892 jq[1612]: true Dec 13 00:20:40.552351 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 00:20:40.585369 tar[1627]: linux-amd64/LICENSE Dec 13 00:20:40.585871 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 13 00:20:40.586528 jq[1640]: true Dec 13 00:20:40.631984 tar[1627]: linux-amd64/helm Dec 13 00:20:40.598839 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 00:20:40.632115 update_engine[1607]: I20251213 00:20:40.590921 1607 main.cc:92] Flatcar Update Engine starting Dec 13 00:20:40.599206 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 00:20:40.617301 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 00:20:40.642966 extend-filesystems[1622]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 00:20:40.642966 extend-filesystems[1622]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 00:20:40.642966 extend-filesystems[1622]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. 
Dec 13 00:20:40.652885 extend-filesystems[1589]: Resized filesystem in /dev/vda9 Dec 13 00:20:40.652335 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 00:20:40.653375 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 00:20:40.671090 dbus-daemon[1586]: [system] SELinux support is enabled Dec 13 00:20:40.671989 systemd-logind[1605]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 00:20:40.672016 systemd-logind[1605]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 00:20:40.672061 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 00:20:40.672469 systemd-logind[1605]: New seat seat0. Dec 13 00:20:40.679832 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 00:20:40.691140 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 00:20:40.691178 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 00:20:40.691889 bash[1673]: Updated "/home/core/.ssh/authorized_keys" Dec 13 00:20:40.694006 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 00:20:40.694022 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 00:20:40.694552 dbus-daemon[1586]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 00:20:40.696387 update_engine[1607]: I20251213 00:20:40.696187 1607 update_check_scheduler.cc:74] Next update check in 7m43s Dec 13 00:20:40.696964 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 00:20:40.701463 systemd[1]: Started update-engine.service - Update Engine. Dec 13 00:20:40.703855 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 00:20:40.707229 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 00:20:40.898630 locksmithd[1675]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 00:20:41.137990 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 00:20:41.279292 sshd_keygen[1624]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 00:20:41.327787 containerd[1633]: time="2025-12-13T00:20:41Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 13 00:20:41.329024 containerd[1633]: time="2025-12-13T00:20:41.328742806Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 13 00:20:41.329573 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 00:20:41.336883 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 00:20:41.342288 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:54232.service - OpenSSH per-connection server daemon (10.0.0.1:54232). 
Dec 13 00:20:41.353337 containerd[1633]: time="2025-12-13T00:20:41.353257329Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.941µs" Dec 13 00:20:41.353337 containerd[1633]: time="2025-12-13T00:20:41.353313094Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 13 00:20:41.353487 containerd[1633]: time="2025-12-13T00:20:41.353378256Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 13 00:20:41.353487 containerd[1633]: time="2025-12-13T00:20:41.353394486Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 13 00:20:41.354066 containerd[1633]: time="2025-12-13T00:20:41.354033374Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 13 00:20:41.354097 containerd[1633]: time="2025-12-13T00:20:41.354065785Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 00:20:41.354203 containerd[1633]: time="2025-12-13T00:20:41.354165512Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 13 00:20:41.354203 containerd[1633]: time="2025-12-13T00:20:41.354193064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 00:20:41.354583 containerd[1633]: time="2025-12-13T00:20:41.354533512Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 13 00:20:41.354583 containerd[1633]: time="2025-12-13T00:20:41.354578266Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 00:20:41.354644 containerd[1633]: time="2025-12-13T00:20:41.354610437Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 13 00:20:41.354644 containerd[1633]: time="2025-12-13T00:20:41.354622860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 13 00:20:41.355178 containerd[1633]: time="2025-12-13T00:20:41.355141142Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 13 00:20:41.355329 containerd[1633]: time="2025-12-13T00:20:41.355298357Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 13 00:20:41.355671 containerd[1633]: time="2025-12-13T00:20:41.355641160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 00:20:41.355704 containerd[1633]: time="2025-12-13T00:20:41.355690342Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 13 00:20:41.355725 containerd[1633]: time="2025-12-13T00:20:41.355705941Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 13 00:20:41.357799 containerd[1633]: 
time="2025-12-13T00:20:41.357754032Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 13 00:20:41.361896 containerd[1633]: time="2025-12-13T00:20:41.359607077Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 13 00:20:41.361896 containerd[1633]: time="2025-12-13T00:20:41.359724958Z" level=info msg="metadata content store policy set" policy=shared Dec 13 00:20:41.370968 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 00:20:41.371395 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 00:20:41.380282 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 00:20:41.391392 containerd[1633]: time="2025-12-13T00:20:41.388963170Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 13 00:20:41.391520 containerd[1633]: time="2025-12-13T00:20:41.391437911Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 00:20:41.391650 containerd[1633]: time="2025-12-13T00:20:41.391615925Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 13 00:20:41.391650 containerd[1633]: time="2025-12-13T00:20:41.391642455Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 13 00:20:41.391799 containerd[1633]: time="2025-12-13T00:20:41.391661060Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 13 00:20:41.391799 containerd[1633]: time="2025-12-13T00:20:41.391676579Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 13 00:20:41.391799 containerd[1633]: time="2025-12-13T00:20:41.391715482Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 13 00:20:41.391799 containerd[1633]: time="2025-12-13T00:20:41.391731542Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 13 00:20:41.391799 containerd[1633]: time="2025-12-13T00:20:41.391747502Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 13 00:20:41.391799 containerd[1633]: time="2025-12-13T00:20:41.391771066Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 13 00:20:41.391799 containerd[1633]: time="2025-12-13T00:20:41.391791274Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 13 00:20:41.392019 containerd[1633]: time="2025-12-13T00:20:41.391945163Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 13 00:20:41.392019 containerd[1633]: time="2025-12-13T00:20:41.391966913Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 13 00:20:41.393295 containerd[1633]: time="2025-12-13T00:20:41.393054693Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 13 00:20:41.393425 containerd[1633]: time="2025-12-13T00:20:41.393387287Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 13 
00:20:41.393467 containerd[1633]: time="2025-12-13T00:20:41.393450616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 13 00:20:41.393504 containerd[1633]: time="2025-12-13T00:20:41.393470142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 13 00:20:41.393504 containerd[1633]: time="2025-12-13T00:20:41.393489419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 13 00:20:41.393945 containerd[1633]: time="2025-12-13T00:20:41.393531478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 13 00:20:41.393945 containerd[1633]: time="2025-12-13T00:20:41.393558448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 13 00:20:41.393945 containerd[1633]: time="2025-12-13T00:20:41.393584737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 13 00:20:41.393945 containerd[1633]: time="2025-12-13T00:20:41.393626496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 13 00:20:41.393945 containerd[1633]: time="2025-12-13T00:20:41.393648988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 13 00:20:41.393945 containerd[1633]: time="2025-12-13T00:20:41.393662102Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 13 00:20:41.393945 containerd[1633]: time="2025-12-13T00:20:41.393696828Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 13 00:20:41.396354 containerd[1633]: time="2025-12-13T00:20:41.396311461Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 13 00:20:41.396940 containerd[1633]: time="2025-12-13T00:20:41.396863226Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 13 00:20:41.396940 containerd[1633]: time="2025-12-13T00:20:41.396892651Z" level=info msg="Start snapshots syncer" Dec 13 00:20:41.402655 containerd[1633]: time="2025-12-13T00:20:41.401936500Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 13 00:20:41.402655 containerd[1633]: time="2025-12-13T00:20:41.402483506Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 13 00:20:41.402896 containerd[1633]: time="2025-12-13T00:20:41.402571821Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 13 00:20:41.404738 containerd[1633]: time="2025-12-13T00:20:41.404710172Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 13 00:20:41.405871 containerd[1633]: time="2025-12-13T00:20:41.405848917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 13 00:20:41.406016 containerd[1633]: time="2025-12-13T00:20:41.406001103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 13 00:20:41.406098 containerd[1633]: time="2025-12-13T00:20:41.406086212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 13 00:20:41.406168 containerd[1633]: time="2025-12-13T00:20:41.406155522Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 13 00:20:41.406244 containerd[1633]: time="2025-12-13T00:20:41.406215555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 13 00:20:41.406334 containerd[1633]: time="2025-12-13T00:20:41.406290936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 13 00:20:41.406427 containerd[1633]: time="2025-12-13T00:20:41.406403607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 13 00:20:41.406569 containerd[1633]: time="2025-12-13T00:20:41.406537749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 13 
00:20:41.406665 containerd[1633]: time="2025-12-13T00:20:41.406643808Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 13 00:20:41.407071 containerd[1633]: time="2025-12-13T00:20:41.407045351Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 00:20:41.407295 containerd[1633]: time="2025-12-13T00:20:41.407278528Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 13 00:20:41.407993 containerd[1633]: time="2025-12-13T00:20:41.407922656Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 00:20:41.407993 containerd[1633]: time="2025-12-13T00:20:41.407951440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 13 00:20:41.407993 containerd[1633]: time="2025-12-13T00:20:41.407967320Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 13 00:20:41.408362 containerd[1633]: time="2025-12-13T00:20:41.407978200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 13 00:20:41.408477 containerd[1633]: time="2025-12-13T00:20:41.408450286Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 13 00:20:41.408862 containerd[1633]: time="2025-12-13T00:20:41.408623841Z" level=info msg="runtime interface created" Dec 13 00:20:41.408940 containerd[1633]: time="2025-12-13T00:20:41.408927430Z" level=info msg="created NRI interface" Dec 13 00:20:41.409022 containerd[1633]: time="2025-12-13T00:20:41.409005286Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 13 00:20:41.409139 containerd[1633]: time="2025-12-13T00:20:41.409121364Z" level=info msg="Connect containerd service" Dec 13 00:20:41.411760 containerd[1633]: time="2025-12-13T00:20:41.410535576Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 00:20:41.417532 containerd[1633]: time="2025-12-13T00:20:41.417289673Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 00:20:41.422965 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 00:20:41.500318 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 00:20:41.509374 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 00:20:41.511622 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 00:20:41.618473 tar[1627]: linux-amd64/README.md Dec 13 00:20:41.636529 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 54232 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:20:41.638856 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:20:41.649998 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 00:20:41.677668 systemd-logind[1605]: New session 1 of user core. Dec 13 00:20:41.679442 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
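The error above ("no network config found in /etc/cni/net.d") means the CRI plugin starts without pod networking; the dumped config earlier shows it looks in /etc/cni/net.d and /opt/cni/bin. A hypothetical minimal conflist that would satisfy it, assuming the standard bridge, host-local and portmap plugins are installed (file name, network name and subnet are invented for illustration):

    cat <<'EOF' > /etc/cni/net.d/10-example-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF

In a Kubernetes bring-up like the one recorded here, a network add-on normally writes this file itself, so the error is commonly transient at this stage.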
Dec 13 00:20:41.683162 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 00:20:41.764893 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 00:20:41.770337 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 00:20:41.824208 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:20:41.827300 systemd-logind[1605]: New session 2 of user core. Dec 13 00:20:41.887888 containerd[1633]: time="2025-12-13T00:20:41.887080378Z" level=info msg="Start subscribing containerd event" Dec 13 00:20:41.887888 containerd[1633]: time="2025-12-13T00:20:41.887177670Z" level=info msg="Start recovering state" Dec 13 00:20:41.888012 containerd[1633]: time="2025-12-13T00:20:41.887908872Z" level=info msg="Start event monitor" Dec 13 00:20:41.888012 containerd[1633]: time="2025-12-13T00:20:41.887946843Z" level=info msg="Start cni network conf syncer for default" Dec 13 00:20:41.888012 containerd[1633]: time="2025-12-13T00:20:41.887954337Z" level=info msg="Start streaming server" Dec 13 00:20:41.888067 containerd[1633]: time="2025-12-13T00:20:41.888021853Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 13 00:20:41.888067 containerd[1633]: time="2025-12-13T00:20:41.888031962Z" level=info msg="runtime interface starting up..." Dec 13 00:20:41.888103 containerd[1633]: time="2025-12-13T00:20:41.888089911Z" level=info msg="starting plugins..." Dec 13 00:20:41.888123 containerd[1633]: time="2025-12-13T00:20:41.888111141Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 13 00:20:41.888591 containerd[1633]: time="2025-12-13T00:20:41.888464153Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 00:20:41.888764 containerd[1633]: time="2025-12-13T00:20:41.888720924Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 00:20:41.889557 containerd[1633]: time="2025-12-13T00:20:41.889518931Z" level=info msg="containerd successfully booted in 0.562215s" Dec 13 00:20:41.889738 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 00:20:42.056859 systemd[1723]: Queued start job for default target default.target. Dec 13 00:20:42.130302 systemd[1723]: Created slice app.slice - User Application Slice. Dec 13 00:20:42.130332 systemd[1723]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 13 00:20:42.130346 systemd[1723]: Reached target paths.target - Paths. Dec 13 00:20:42.130395 systemd[1723]: Reached target timers.target - Timers. Dec 13 00:20:42.132040 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 00:20:42.133287 systemd[1723]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 13 00:20:42.147836 systemd[1723]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 13 00:20:42.150061 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 00:20:42.150210 systemd[1723]: Reached target sockets.target - Sockets. Dec 13 00:20:42.150263 systemd[1723]: Reached target basic.target - Basic System. Dec 13 00:20:42.150306 systemd[1723]: Reached target default.target - Main User Target. Dec 13 00:20:42.150342 systemd[1723]: Startup finished in 311ms. Dec 13 00:20:42.150467 systemd[1]: Started user@500.service - User Manager for UID 500. 
Dec 13 00:20:42.203167 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 00:20:42.280108 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:54236.service - OpenSSH per-connection server daemon (10.0.0.1:54236). Dec 13 00:20:42.397194 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 54236 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:20:42.399351 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:20:42.406349 systemd-logind[1605]: New session 3 of user core. Dec 13 00:20:42.418037 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 00:20:42.460910 sshd[1746]: Connection closed by 10.0.0.1 port 54236 Dec 13 00:20:42.461990 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Dec 13 00:20:42.472566 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:54236.service: Deactivated successfully. Dec 13 00:20:42.474478 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 00:20:42.475506 systemd-logind[1605]: Session 3 logged out. Waiting for processes to exit. Dec 13 00:20:42.478562 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:54248.service - OpenSSH per-connection server daemon (10.0.0.1:54248). Dec 13 00:20:42.481747 systemd-logind[1605]: Removed session 3. Dec 13 00:20:42.634436 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 54248 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:20:42.636381 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:20:42.641825 systemd-logind[1605]: New session 4 of user core. Dec 13 00:20:42.651036 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 00:20:42.685831 sshd[1756]: Connection closed by 10.0.0.1 port 54248 Dec 13 00:20:42.686149 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Dec 13 00:20:42.692108 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:54248.service: Deactivated successfully. Dec 13 00:20:42.694368 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 00:20:42.695364 systemd-logind[1605]: Session 4 logged out. Waiting for processes to exit. Dec 13 00:20:42.696681 systemd-logind[1605]: Removed session 4. Dec 13 00:20:42.945973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:20:42.948340 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 00:20:42.950514 systemd[1]: Startup finished in 3.232s (kernel) + 6.140s (initrd) + 7.201s (userspace) = 16.574s. Dec 13 00:20:42.960175 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:20:43.593755 kubelet[1766]: E1213 00:20:43.593637 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:20:43.597872 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:20:43.598119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:20:43.598636 systemd[1]: kubelet.service: Consumed 2.197s CPU time, 265.7M memory peak. Dec 13 00:20:52.698654 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:33190.service - OpenSSH per-connection server daemon (10.0.0.1:33190). 
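The kubelet failure earlier on this line ("open /var/lib/kubelet/config.yaml: no such file or directory") is why the service exits with status 1; that file is normally written when the node is provisioned (for example by kubeadm init/join), and until then systemd keeps retrying the unit, as the later "Scheduled restart job" entry shows. Illustrative checks only, not taken from this host's provisioning flow:

    ls -l /var/lib/kubelet/config.yaml        # absent until the node has been set up
    systemctl status kubelet --no-pager       # shows the exit-code failure recorded above
    journalctl -u kubelet -n 20 --no-pager    # repeats the "failed to load kubelet config file" error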
Dec 13 00:20:52.748827 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 33190 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:20:52.750699 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:20:52.755478 systemd-logind[1605]: New session 5 of user core. Dec 13 00:20:52.765957 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 00:20:52.780895 sshd[1783]: Connection closed by 10.0.0.1 port 33190 Dec 13 00:20:52.781233 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Dec 13 00:20:52.797684 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:33190.service: Deactivated successfully. Dec 13 00:20:52.799609 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 00:20:52.800450 systemd-logind[1605]: Session 5 logged out. Waiting for processes to exit. Dec 13 00:20:52.803258 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:33200.service - OpenSSH per-connection server daemon (10.0.0.1:33200). Dec 13 00:20:52.804023 systemd-logind[1605]: Removed session 5. Dec 13 00:20:52.860202 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 33200 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:20:52.861880 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:20:52.866529 systemd-logind[1605]: New session 6 of user core. Dec 13 00:20:52.876944 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 00:20:52.886096 sshd[1793]: Connection closed by 10.0.0.1 port 33200 Dec 13 00:20:52.886410 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Dec 13 00:20:52.895414 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:33200.service: Deactivated successfully. Dec 13 00:20:52.897077 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 00:20:52.897917 systemd-logind[1605]: Session 6 logged out. Waiting for processes to exit. Dec 13 00:20:52.900417 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:33204.service - OpenSSH per-connection server daemon (10.0.0.1:33204). Dec 13 00:20:52.901042 systemd-logind[1605]: Removed session 6. Dec 13 00:20:52.960363 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 33204 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:20:52.962119 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:20:52.966750 systemd-logind[1605]: New session 7 of user core. Dec 13 00:20:52.979990 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 00:20:52.994574 sshd[1803]: Connection closed by 10.0.0.1 port 33204 Dec 13 00:20:52.994965 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Dec 13 00:20:53.004321 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:33204.service: Deactivated successfully. Dec 13 00:20:53.006279 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 00:20:53.007053 systemd-logind[1605]: Session 7 logged out. Waiting for processes to exit. Dec 13 00:20:53.009847 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:33220.service - OpenSSH per-connection server daemon (10.0.0.1:33220). Dec 13 00:20:53.010702 systemd-logind[1605]: Removed session 7. 
Dec 13 00:20:53.070188 sshd[1809]: Accepted publickey for core from 10.0.0.1 port 33220 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:20:53.071902 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:20:53.076440 systemd-logind[1605]: New session 8 of user core. Dec 13 00:20:53.085963 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 00:20:53.110204 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 00:20:53.110581 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:20:53.124839 sudo[1814]: pam_unix(sudo:session): session closed for user root Dec 13 00:20:53.126778 sshd[1813]: Connection closed by 10.0.0.1 port 33220 Dec 13 00:20:53.127158 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Dec 13 00:20:53.140472 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:33220.service: Deactivated successfully. Dec 13 00:20:53.142400 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 00:20:53.143405 systemd-logind[1605]: Session 8 logged out. Waiting for processes to exit. Dec 13 00:20:53.146287 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:33224.service - OpenSSH per-connection server daemon (10.0.0.1:33224). Dec 13 00:20:53.147104 systemd-logind[1605]: Removed session 8. Dec 13 00:20:53.209510 sshd[1821]: Accepted publickey for core from 10.0.0.1 port 33224 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:20:53.211341 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:20:53.216387 systemd-logind[1605]: New session 9 of user core. Dec 13 00:20:53.230027 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 00:20:53.244654 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 00:20:53.245022 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:20:53.430845 sudo[1827]: pam_unix(sudo:session): session closed for user root Dec 13 00:20:53.439136 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 00:20:53.439566 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:20:53.450310 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 00:20:53.508000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 00:20:53.509538 augenrules[1851]: No rules Dec 13 00:20:53.510535 kernel: kauditd_printk_skb: 103 callbacks suppressed Dec 13 00:20:53.510592 kernel: audit: type=1305 audit(1765585253.508:238): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 00:20:53.511097 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 00:20:53.511524 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
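The session-9 commands above delete 80-selinux.rules and 99-default.rules from /etc/audit/rules.d and restart audit-rules; augenrules then reports "No rules", and the audit records below show auditctl reloading /etc/audit/audit.rules. A hedged verification sketch using standard auditd tooling (these commands are not taken from this log):

    augenrules --check    # reports whether rules.d/ and the compiled audit.rules are in sync
    auditctl -l           # lists rules currently loaded in the kernel ("No rules" expected here)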
Dec 13 00:20:53.512826 sudo[1826]: pam_unix(sudo:session): session closed for user root Dec 13 00:20:53.508000 audit[1851]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdadf70e40 a2=420 a3=0 items=0 ppid=1832 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:53.514158 sshd[1825]: Connection closed by 10.0.0.1 port 33224 Dec 13 00:20:53.514529 sshd-session[1821]: pam_unix(sshd:session): session closed for user core Dec 13 00:20:53.519936 kernel: audit: type=1300 audit(1765585253.508:238): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdadf70e40 a2=420 a3=0 items=0 ppid=1832 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:53.508000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:20:53.522659 kernel: audit: type=1327 audit(1765585253.508:238): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 00:20:53.522682 kernel: audit: type=1130 audit(1765585253.510:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.526974 kernel: audit: type=1131 audit(1765585253.510:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.510000 audit[1826]: USER_END pid=1826 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.536209 kernel: audit: type=1106 audit(1765585253.510:241): pid=1826 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.536267 kernel: audit: type=1104 audit(1765585253.510:242): pid=1826 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.510000 audit[1826]: CRED_DISP pid=1826 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 00:20:53.513000 audit[1821]: USER_END pid=1821 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:20:53.564922 kernel: audit: type=1106 audit(1765585253.513:243): pid=1821 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:20:53.564953 kernel: audit: type=1104 audit(1765585253.513:244): pid=1821 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:20:53.513000 audit[1821]: CRED_DISP pid=1821 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:20:53.579495 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:33224.service: Deactivated successfully. Dec 13 00:20:53.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.65:22-10.0.0.1:33224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.581184 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 00:20:53.582212 systemd-logind[1605]: Session 9 logged out. Waiting for processes to exit. Dec 13 00:20:53.584682 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:33234.service - OpenSSH per-connection server daemon (10.0.0.1:33234). Dec 13 00:20:53.584831 kernel: audit: type=1131 audit(1765585253.578:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.65:22-10.0.0.1:33224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.65:22-10.0.0.1:33234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.585301 systemd-logind[1605]: Removed session 9. Dec 13 00:20:53.604053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 00:20:53.605638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
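"kubelet.service: Scheduled restart job, restart counter is at 1" indicates the unit is configured to restart itself after the earlier failure. A hedged one-liner for inspecting that policy and counter (assuming a systemd recent enough to expose NRestarts):

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts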
Dec 13 00:20:53.641000 audit[1860]: USER_ACCT pid=1860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:20:53.642335 sshd[1860]: Accepted publickey for core from 10.0.0.1 port 33234 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:20:53.642000 audit[1860]: CRED_ACQ pid=1860 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:20:53.642000 audit[1860]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe774d1f50 a2=3 a3=0 items=0 ppid=1 pid=1860 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:53.642000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:20:53.643615 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:20:53.648739 systemd-logind[1605]: New session 10 of user core. Dec 13 00:20:53.656962 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 00:20:53.659000 audit[1860]: USER_START pid=1860 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:20:53.661000 audit[1867]: CRED_ACQ pid=1867 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:20:53.673000 audit[1868]: USER_ACCT pid=1868 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.674633 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 00:20:53.674000 audit[1868]: CRED_REFR pid=1868 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.675190 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 00:20:53.674000 audit[1868]: USER_START pid=1868 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:20:53.878355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:20:53.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:20:53.889167 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:20:53.938916 kubelet[1886]: E1213 00:20:53.938836 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:20:53.945779 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:20:53.945979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:20:53.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:20:53.946407 systemd[1]: kubelet.service: Consumed 291ms CPU time, 111.2M memory peak. Dec 13 00:20:54.328899 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 00:20:54.356443 (dockerd)[1904]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 00:20:54.771716 dockerd[1904]: time="2025-12-13T00:20:54.771589442Z" level=info msg="Starting up" Dec 13 00:20:54.777780 dockerd[1904]: time="2025-12-13T00:20:54.777739615Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 13 00:20:54.796712 dockerd[1904]: time="2025-12-13T00:20:54.796650530Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 13 00:20:55.523332 dockerd[1904]: time="2025-12-13T00:20:55.523255253Z" level=info msg="Loading containers: start." 
Dec 13 00:20:55.536843 kernel: Initializing XFRM netlink socket Dec 13 00:20:55.606000 audit[1958]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.606000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcd0e44d50 a2=0 a3=0 items=0 ppid=1904 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.606000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 00:20:55.609000 audit[1960]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.609000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc8b2a2de0 a2=0 a3=0 items=0 ppid=1904 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 00:20:55.611000 audit[1962]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.611000 audit[1962]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3e469bb0 a2=0 a3=0 items=0 ppid=1904 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.611000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 13 00:20:55.614000 audit[1964]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.614000 audit[1964]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc98b8adf0 a2=0 a3=0 items=0 ppid=1904 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.614000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 13 00:20:55.616000 audit[1966]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.616000 audit[1966]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd8412d6b0 a2=0 a3=0 items=0 ppid=1904 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 13 00:20:55.619000 audit[1968]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.619000 audit[1968]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffd81bb6bf0 a2=0 a3=0 items=0 ppid=1904 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.619000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:20:55.621000 audit[1970]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.621000 audit[1970]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffda629dd40 a2=0 a3=0 items=0 ppid=1904 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.621000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:20:55.624000 audit[1972]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.624000 audit[1972]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffdacf950c0 a2=0 a3=0 items=0 ppid=1904 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.624000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 00:20:55.657000 audit[1975]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.657000 audit[1975]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffebaf70d40 a2=0 a3=0 items=0 ppid=1904 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.657000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 00:20:55.660000 audit[1977]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.660000 audit[1977]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd53e3b480 a2=0 a3=0 items=0 ppid=1904 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.660000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 00:20:55.662000 audit[1979]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.662000 audit[1979]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffe7ef13970 a2=0 
a3=0 items=0 ppid=1904 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.662000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 00:20:55.665000 audit[1981]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.665000 audit[1981]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd80e136f0 a2=0 a3=0 items=0 ppid=1904 pid=1981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.665000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:20:55.668000 audit[1983]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.668000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffddeecab0 a2=0 a3=0 items=0 ppid=1904 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.668000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 00:20:55.717000 audit[2013]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.717000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffff1581090 a2=0 a3=0 items=0 ppid=1904 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 00:20:55.719000 audit[2015]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.719000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffee6729e80 a2=0 a3=0 items=0 ppid=1904 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 00:20:55.721000 audit[2017]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.721000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca1dc82f0 a2=0 a3=0 items=0 ppid=1904 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 00:20:55.721000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 13 00:20:55.723000 audit[2019]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.723000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe67958da0 a2=0 a3=0 items=0 ppid=1904 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 13 00:20:55.726000 audit[2021]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.726000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe07619fe0 a2=0 a3=0 items=0 ppid=1904 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.726000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 13 00:20:55.728000 audit[2023]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.728000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc59553a80 a2=0 a3=0 items=0 ppid=1904 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.728000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:20:55.731000 audit[2025]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.731000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffa05ad700 a2=0 a3=0 items=0 ppid=1904 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:20:55.733000 audit[2027]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.733000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc472960a0 a2=0 a3=0 items=0 ppid=1904 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.733000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 00:20:55.736000 audit[2029]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.736000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffcb3ce45d0 a2=0 a3=0 items=0 ppid=1904 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 13 00:20:55.739000 audit[2031]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.739000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd5a0cd5f0 a2=0 a3=0 items=0 ppid=1904 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 13 00:20:55.741000 audit[2033]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.741000 audit[2033]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffffcf1fc20 a2=0 a3=0 items=0 ppid=1904 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.741000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 13 00:20:55.744000 audit[2035]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.744000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd83b25360 a2=0 a3=0 items=0 ppid=1904 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.744000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 00:20:55.746000 audit[2037]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.746000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffdb8100b10 a2=0 a3=0 items=0 ppid=1904 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.746000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 13 00:20:55.753000 audit[2042]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.753000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdb65afbe0 a2=0 a3=0 items=0 ppid=1904 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.753000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 00:20:55.756000 audit[2044]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.756000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff87f2f000 a2=0 a3=0 items=0 ppid=1904 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 00:20:55.758000 audit[2046]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.758000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff421b2880 a2=0 a3=0 items=0 ppid=1904 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.758000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 00:20:55.761000 audit[2048]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.761000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdf50363a0 a2=0 a3=0 items=0 ppid=1904 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.761000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 00:20:55.763000 audit[2050]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.763000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdbd112190 a2=0 a3=0 items=0 ppid=1904 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 00:20:55.766000 audit[2052]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2052 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:20:55.766000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcc1bddea0 a2=0 a3=0 items=0 ppid=1904 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.766000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 00:20:55.786000 audit[2056]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.786000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fffa1674400 a2=0 a3=0 items=0 ppid=1904 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.786000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 00:20:55.789000 audit[2058]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.789000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffefaf79c20 a2=0 a3=0 items=0 ppid=1904 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.789000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 00:20:55.801000 audit[2066]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.801000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc20881cb0 a2=0 a3=0 items=0 ppid=1904 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 13 00:20:55.812000 audit[2072]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.812000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeaf1cf5d0 a2=0 a3=0 items=0 ppid=1904 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.812000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 13 00:20:55.815000 audit[2074]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 
00:20:55.815000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe4124e000 a2=0 a3=0 items=0 ppid=1904 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.815000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 00:20:55.818000 audit[2076]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.818000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdacabbec0 a2=0 a3=0 items=0 ppid=1904 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.818000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 13 00:20:55.820000 audit[2078]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.820000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffffeb3f790 a2=0 a3=0 items=0 ppid=1904 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.820000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 00:20:55.823000 audit[2080]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:20:55.823000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc057da0c0 a2=0 a3=0 items=0 ppid=1904 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:20:55.823000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 00:20:55.824743 systemd-networkd[1321]: docker0: Link UP Dec 13 00:20:55.959499 dockerd[1904]: time="2025-12-13T00:20:55.959445546Z" level=info msg="Loading containers: done." Dec 13 00:20:55.983660 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2830446385-merged.mount: Deactivated successfully. 
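The hex blobs in the audit PROCTITLE records above are simply the NUL-separated command lines of the iptables/ip6tables invocations Docker issued while wiring up its chains. A minimal decoding sketch (the helper name is ours; the sample value is copied verbatim from one of the records above):

```python
# Minimal sketch: decode an audit PROCTITLE hex value back into the command
# line it records. Arguments are separated by NUL bytes in the raw field.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

# Sample copied from one of the records above (the "-N DOCKER-USER" call):
sample = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
          "002D740066696C746572002D4E00444F434B45522D55534552")
print(decode_proctitle(sample))
# -> /usr/bin/iptables --wait -t filter -N DOCKER-USER
```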
Dec 13 00:20:55.986828 dockerd[1904]: time="2025-12-13T00:20:55.986770049Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 00:20:55.986912 dockerd[1904]: time="2025-12-13T00:20:55.986896476Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 13 00:20:55.987025 dockerd[1904]: time="2025-12-13T00:20:55.987008256Z" level=info msg="Initializing buildkit" Dec 13 00:20:56.181186 dockerd[1904]: time="2025-12-13T00:20:56.181100378Z" level=info msg="Completed buildkit initialization" Dec 13 00:20:56.188201 dockerd[1904]: time="2025-12-13T00:20:56.188132235Z" level=info msg="Daemon has completed initialization" Dec 13 00:20:56.188414 dockerd[1904]: time="2025-12-13T00:20:56.188374860Z" level=info msg="API listen on /run/docker.sock" Dec 13 00:20:56.188605 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 00:20:56.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:20:56.906733 containerd[1633]: time="2025-12-13T00:20:56.906671863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 13 00:20:57.685529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978762886.mount: Deactivated successfully. Dec 13 00:20:58.531885 containerd[1633]: time="2025-12-13T00:20:58.531804317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:20:58.532631 containerd[1633]: time="2025-12-13T00:20:58.532589439Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=27403437" Dec 13 00:20:58.533535 containerd[1633]: time="2025-12-13T00:20:58.533495769Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:20:58.536282 containerd[1633]: time="2025-12-13T00:20:58.536236348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:20:58.537112 containerd[1633]: time="2025-12-13T00:20:58.537074560Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 1.630339889s" Dec 13 00:20:58.537172 containerd[1633]: time="2025-12-13T00:20:58.537118633Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 13 00:20:58.537915 containerd[1633]: time="2025-12-13T00:20:58.537882155Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 13 00:20:59.915486 containerd[1633]: time="2025-12-13T00:20:59.915394141Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:20:59.916381 containerd[1633]: time="2025-12-13T00:20:59.916336809Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24983855" Dec 13 00:20:59.917728 containerd[1633]: time="2025-12-13T00:20:59.917646385Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:20:59.920736 containerd[1633]: time="2025-12-13T00:20:59.920693600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:20:59.922165 containerd[1633]: time="2025-12-13T00:20:59.922113653Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 1.384204808s" Dec 13 00:20:59.922165 containerd[1633]: time="2025-12-13T00:20:59.922162855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 13 00:20:59.922897 containerd[1633]: time="2025-12-13T00:20:59.922844083Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 13 00:21:01.650760 containerd[1633]: time="2025-12-13T00:21:01.650679260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:01.651661 containerd[1633]: time="2025-12-13T00:21:01.651632428Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111" Dec 13 00:21:01.653037 containerd[1633]: time="2025-12-13T00:21:01.653007817Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:01.655454 containerd[1633]: time="2025-12-13T00:21:01.655430531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:01.656468 containerd[1633]: time="2025-12-13T00:21:01.656431037Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.733534727s" Dec 13 00:21:01.656468 containerd[1633]: time="2025-12-13T00:21:01.656464620Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 13 00:21:01.656934 containerd[1633]: time="2025-12-13T00:21:01.656896249Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 13 00:21:02.623224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3324001918.mount: Deactivated successfully. Dec 13 00:21:03.344215 containerd[1633]: time="2025-12-13T00:21:03.344147147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:03.347536 containerd[1633]: time="2025-12-13T00:21:03.347508230Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=19571995" Dec 13 00:21:03.352878 containerd[1633]: time="2025-12-13T00:21:03.352842424Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:03.355046 containerd[1633]: time="2025-12-13T00:21:03.355008326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:03.355535 containerd[1633]: time="2025-12-13T00:21:03.355492864Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 1.698563964s" Dec 13 00:21:03.355535 containerd[1633]: time="2025-12-13T00:21:03.355528431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 13 00:21:03.356034 containerd[1633]: time="2025-12-13T00:21:03.356010104Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 13 00:21:03.834064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1251593048.mount: Deactivated successfully. Dec 13 00:21:04.063619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 00:21:04.065617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:21:04.275193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:21:04.278390 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 13 00:21:04.278445 kernel: audit: type=1130 audit(1765585264.274:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:04.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:21:04.281480 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 00:21:04.415294 kubelet[2222]: E1213 00:21:04.415226 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 00:21:04.419359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 00:21:04.419563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 00:21:04.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:21:04.420084 systemd[1]: kubelet.service: Consumed 226ms CPU time, 110.7M memory peak. Dec 13 00:21:04.425873 kernel: audit: type=1131 audit(1765585264.419:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:21:05.033962 containerd[1633]: time="2025-12-13T00:21:05.033827651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:05.034590 containerd[1633]: time="2025-12-13T00:21:05.034525169Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17693106" Dec 13 00:21:05.035886 containerd[1633]: time="2025-12-13T00:21:05.035861826Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:05.038885 containerd[1633]: time="2025-12-13T00:21:05.038840262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:05.040010 containerd[1633]: time="2025-12-13T00:21:05.039955854Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.683889584s" Dec 13 00:21:05.040067 containerd[1633]: time="2025-12-13T00:21:05.040018922Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 13 00:21:05.041328 containerd[1633]: time="2025-12-13T00:21:05.041292731Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 00:21:05.554154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541869588.mount: Deactivated successfully. 
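As an aside on reading these records: the kernel's audit(...) stamps, e.g. audit(1765585264.419:299) above, are epoch seconds plus a per-boot serial number, so they map directly onto the journal's wall-clock timestamps. A small sketch, with the value copied from the record above and UTC assumed:

```python
# Hedged sketch: convert an audit record stamp "<epoch>.<millis>:<serial>"
# into a readable timestamp; serial just orders records within the boot.
from datetime import datetime, timezone

epoch, serial = "1765585264.419:299".split(":")
print(datetime.fromtimestamp(float(epoch), tz=timezone.utc), "serial", serial)
# -> 2025-12-13 00:21:04.419000+00:00 serial 299
```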
Dec 13 00:21:05.562575 containerd[1633]: time="2025-12-13T00:21:05.562470916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:21:05.563439 containerd[1633]: time="2025-12-13T00:21:05.563378829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 13 00:21:05.564693 containerd[1633]: time="2025-12-13T00:21:05.564634393Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:21:05.567124 containerd[1633]: time="2025-12-13T00:21:05.567040646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 00:21:05.567822 containerd[1633]: time="2025-12-13T00:21:05.567766147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 526.437418ms" Dec 13 00:21:05.567884 containerd[1633]: time="2025-12-13T00:21:05.567825818Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 00:21:05.568624 containerd[1633]: time="2025-12-13T00:21:05.568391329Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 13 00:21:06.248500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount453334670.mount: Deactivated successfully. 
Dec 13 00:21:08.246083 containerd[1633]: time="2025-12-13T00:21:08.246015210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:08.246940 containerd[1633]: time="2025-12-13T00:21:08.246905088Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45502580" Dec 13 00:21:08.248369 containerd[1633]: time="2025-12-13T00:21:08.248325482Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:08.251095 containerd[1633]: time="2025-12-13T00:21:08.251059590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:08.252302 containerd[1633]: time="2025-12-13T00:21:08.252266783Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.683838705s" Dec 13 00:21:08.252302 containerd[1633]: time="2025-12-13T00:21:08.252300627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 13 00:21:10.962030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:21:10.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:10.962355 systemd[1]: kubelet.service: Consumed 226ms CPU time, 110.7M memory peak. Dec 13 00:21:10.965251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:21:10.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:10.972129 kernel: audit: type=1130 audit(1765585270.961:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:10.972377 kernel: audit: type=1131 audit(1765585270.961:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:11.003681 systemd[1]: Reload requested from client PID 2358 ('systemctl') (unit session-10.scope)... Dec 13 00:21:11.003737 systemd[1]: Reloading... Dec 13 00:21:11.141887 zram_generator::config[2405]: No configuration found. Dec 13 00:21:11.562693 systemd[1]: Reloading finished in 558 ms. 
Dec 13 00:21:11.593244 kernel: audit: type=1334 audit(1765585271.589:302): prog-id=67 op=LOAD Dec 13 00:21:11.593385 kernel: audit: type=1334 audit(1765585271.589:303): prog-id=50 op=UNLOAD Dec 13 00:21:11.589000 audit: BPF prog-id=67 op=LOAD Dec 13 00:21:11.589000 audit: BPF prog-id=50 op=UNLOAD Dec 13 00:21:11.589000 audit: BPF prog-id=68 op=LOAD Dec 13 00:21:11.589000 audit: BPF prog-id=69 op=LOAD Dec 13 00:21:11.598771 kernel: audit: type=1334 audit(1765585271.589:304): prog-id=68 op=LOAD Dec 13 00:21:11.598847 kernel: audit: type=1334 audit(1765585271.589:305): prog-id=69 op=LOAD Dec 13 00:21:11.598880 kernel: audit: type=1334 audit(1765585271.589:306): prog-id=54 op=UNLOAD Dec 13 00:21:11.589000 audit: BPF prog-id=54 op=UNLOAD Dec 13 00:21:11.600282 kernel: audit: type=1334 audit(1765585271.589:307): prog-id=55 op=UNLOAD Dec 13 00:21:11.589000 audit: BPF prog-id=55 op=UNLOAD Dec 13 00:21:11.591000 audit: BPF prog-id=70 op=LOAD Dec 13 00:21:11.603412 kernel: audit: type=1334 audit(1765585271.591:308): prog-id=70 op=LOAD Dec 13 00:21:11.603471 kernel: audit: type=1334 audit(1765585271.591:309): prog-id=51 op=UNLOAD Dec 13 00:21:11.591000 audit: BPF prog-id=51 op=UNLOAD Dec 13 00:21:11.591000 audit: BPF prog-id=71 op=LOAD Dec 13 00:21:11.591000 audit: BPF prog-id=72 op=LOAD Dec 13 00:21:11.591000 audit: BPF prog-id=52 op=UNLOAD Dec 13 00:21:11.591000 audit: BPF prog-id=53 op=UNLOAD Dec 13 00:21:11.598000 audit: BPF prog-id=73 op=LOAD Dec 13 00:21:11.598000 audit: BPF prog-id=56 op=UNLOAD Dec 13 00:21:11.598000 audit: BPF prog-id=74 op=LOAD Dec 13 00:21:11.598000 audit: BPF prog-id=75 op=LOAD Dec 13 00:21:11.598000 audit: BPF prog-id=57 op=UNLOAD Dec 13 00:21:11.598000 audit: BPF prog-id=58 op=UNLOAD Dec 13 00:21:11.600000 audit: BPF prog-id=76 op=LOAD Dec 13 00:21:11.600000 audit: BPF prog-id=62 op=UNLOAD Dec 13 00:21:11.600000 audit: BPF prog-id=77 op=LOAD Dec 13 00:21:11.600000 audit: BPF prog-id=47 op=UNLOAD Dec 13 00:21:11.600000 audit: BPF prog-id=78 op=LOAD Dec 13 00:21:11.601000 audit: BPF prog-id=79 op=LOAD Dec 13 00:21:11.601000 audit: BPF prog-id=48 op=UNLOAD Dec 13 00:21:11.601000 audit: BPF prog-id=49 op=UNLOAD Dec 13 00:21:11.601000 audit: BPF prog-id=80 op=LOAD Dec 13 00:21:11.601000 audit: BPF prog-id=59 op=UNLOAD Dec 13 00:21:11.601000 audit: BPF prog-id=81 op=LOAD Dec 13 00:21:11.601000 audit: BPF prog-id=82 op=LOAD Dec 13 00:21:11.601000 audit: BPF prog-id=60 op=UNLOAD Dec 13 00:21:11.601000 audit: BPF prog-id=61 op=UNLOAD Dec 13 00:21:11.605000 audit: BPF prog-id=83 op=LOAD Dec 13 00:21:11.605000 audit: BPF prog-id=64 op=UNLOAD Dec 13 00:21:11.605000 audit: BPF prog-id=84 op=LOAD Dec 13 00:21:11.605000 audit: BPF prog-id=85 op=LOAD Dec 13 00:21:11.605000 audit: BPF prog-id=65 op=UNLOAD Dec 13 00:21:11.605000 audit: BPF prog-id=66 op=UNLOAD Dec 13 00:21:11.606000 audit: BPF prog-id=86 op=LOAD Dec 13 00:21:11.606000 audit: BPF prog-id=63 op=UNLOAD Dec 13 00:21:11.627757 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 00:21:11.627895 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 00:21:11.628426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:21:11.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 00:21:11.628502 systemd[1]: kubelet.service: Consumed 258ms CPU time, 98.5M memory peak. 
Dec 13 00:21:11.630485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:21:11.839642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:21:11.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:11.857871 (kubelet)[2452]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 00:21:11.903070 kubelet[2452]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:21:11.903070 kubelet[2452]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 00:21:11.903070 kubelet[2452]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:21:11.903480 kubelet[2452]: I1213 00:21:11.903169 2452 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:21:12.115384 kubelet[2452]: I1213 00:21:12.115253 2452 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 13 00:21:12.115384 kubelet[2452]: I1213 00:21:12.115299 2452 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:21:12.115722 kubelet[2452]: I1213 00:21:12.115687 2452 server.go:954] "Client rotation is on, will bootstrap in background" Dec 13 00:21:12.135821 kubelet[2452]: E1213 00:21:12.135756 2452 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Dec 13 00:21:12.137559 kubelet[2452]: I1213 00:21:12.137523 2452 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:21:12.146106 kubelet[2452]: I1213 00:21:12.146082 2452 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 00:21:12.152201 kubelet[2452]: I1213 00:21:12.152164 2452 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 00:21:12.153653 kubelet[2452]: I1213 00:21:12.153601 2452 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 00:21:12.153859 kubelet[2452]: I1213 00:21:12.153636 2452 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 00:21:12.153987 kubelet[2452]: I1213 00:21:12.153870 2452 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 00:21:12.153987 kubelet[2452]: I1213 00:21:12.153879 2452 container_manager_linux.go:304] "Creating device plugin manager" Dec 13 00:21:12.154046 kubelet[2452]: I1213 00:21:12.154031 2452 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:21:12.157009 kubelet[2452]: I1213 00:21:12.156973 2452 kubelet.go:446] "Attempting to sync node with API server" Dec 13 00:21:12.157009 kubelet[2452]: I1213 00:21:12.157001 2452 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 00:21:12.157120 kubelet[2452]: I1213 00:21:12.157039 2452 kubelet.go:352] "Adding apiserver pod source" Dec 13 00:21:12.157120 kubelet[2452]: I1213 00:21:12.157063 2452 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 00:21:12.160367 kubelet[2452]: I1213 00:21:12.160337 2452 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 00:21:12.160794 kubelet[2452]: I1213 00:21:12.160767 2452 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 00:21:12.161302 kubelet[2452]: W1213 00:21:12.161273 2452 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 00:21:12.163561 kubelet[2452]: W1213 00:21:12.163451 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 00:21:12.163749 kubelet[2452]: E1213 00:21:12.163697 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Dec 13 00:21:12.164684 kubelet[2452]: I1213 00:21:12.164088 2452 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 00:21:12.164684 kubelet[2452]: I1213 00:21:12.164129 2452 server.go:1287] "Started kubelet" Dec 13 00:21:12.165184 kubelet[2452]: I1213 00:21:12.165155 2452 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 00:21:12.165401 kubelet[2452]: W1213 00:21:12.165361 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 00:21:12.165450 kubelet[2452]: E1213 00:21:12.165409 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Dec 13 00:21:12.166465 kubelet[2452]: I1213 00:21:12.166439 2452 server.go:479] "Adding debug handlers to kubelet server" Dec 13 00:21:12.167803 kubelet[2452]: I1213 00:21:12.167727 2452 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 00:21:12.168072 kubelet[2452]: I1213 00:21:12.168056 2452 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 00:21:12.169858 kubelet[2452]: E1213 00:21:12.168727 2452 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18809e735800b713 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-13 00:21:12.164103955 +0000 UTC m=+0.300057326,LastTimestamp:2025-12-13 00:21:12.164103955 +0000 UTC m=+0.300057326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 00:21:12.170139 kubelet[2452]: I1213 00:21:12.170102 2452 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 00:21:12.170239 kubelet[2452]: I1213 00:21:12.170222 2452 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 00:21:12.170316 kubelet[2452]: I1213 00:21:12.170303 2452 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 00:21:12.172160 kubelet[2452]: E1213 00:21:12.172122 2452 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="200ms" Dec 13 00:21:12.172237 kubelet[2452]: I1213 00:21:12.172192 2452 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 00:21:12.172277 kubelet[2452]: I1213 00:21:12.172252 2452 reconciler.go:26] "Reconciler: start to sync state" Dec 13 00:21:12.172381 kubelet[2452]: E1213 00:21:12.172309 2452 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 00:21:12.172670 kubelet[2452]: W1213 00:21:12.172626 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 00:21:12.172670 kubelet[2452]: E1213 00:21:12.172666 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Dec 13 00:21:12.172875 kubelet[2452]: E1213 00:21:12.172721 2452 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:21:12.173323 kubelet[2452]: I1213 00:21:12.173304 2452 factory.go:221] Registration of the systemd container factory successfully Dec 13 00:21:12.173400 kubelet[2452]: I1213 00:21:12.173383 2452 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 00:21:12.173000 audit[2465]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:12.173000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc68b8aa30 a2=0 a3=0 items=0 ppid=2452 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 00:21:12.174412 kubelet[2452]: I1213 00:21:12.174353 2452 factory.go:221] Registration of the containerd container factory successfully Dec 13 00:21:12.174000 audit[2466]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:12.174000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefc5e2f30 a2=0 a3=0 items=0 ppid=2452 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.174000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 00:21:12.177000 audit[2468]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:12.177000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe80c98420 a2=0 a3=0 items=0 ppid=2452 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:21:12.179000 audit[2470]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:12.179000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff5732b420 a2=0 a3=0 items=0 ppid=2452 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.179000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:21:12.187000 audit[2474]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:12.187000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffffe8984c0 a2=0 a3=0 items=0 ppid=2452 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 00:21:12.188361 kubelet[2452]: I1213 00:21:12.188292 2452 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 00:21:12.188000 audit[2476]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:12.188000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffde80d20c0 a2=0 a3=0 items=0 ppid=2452 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 00:21:12.189614 kubelet[2452]: I1213 00:21:12.189588 2452 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 00:21:12.189614 kubelet[2452]: I1213 00:21:12.189612 2452 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 13 00:21:12.189692 kubelet[2452]: I1213 00:21:12.189636 2452 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 13 00:21:12.189692 kubelet[2452]: I1213 00:21:12.189645 2452 kubelet.go:2382] "Starting kubelet main sync loop" Dec 13 00:21:12.189761 kubelet[2452]: E1213 00:21:12.189688 2452 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 00:21:12.189000 audit[2477]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:12.189000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde3440f10 a2=0 a3=0 items=0 ppid=2452 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.189000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 00:21:12.190000 audit[2478]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:12.190000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc19384b90 a2=0 a3=0 items=0 ppid=2452 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 00:21:12.190000 audit[2480]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:12.190000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe114e3d10 a2=0 a3=0 items=0 ppid=2452 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.190000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 00:21:12.191000 audit[2482]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:12.191000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff777709a0 a2=0 a3=0 items=0 ppid=2452 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 00:21:12.191000 audit[2481]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:12.191000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd79d9ff70 a2=0 a3=0 items=0 ppid=2452 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
00:21:12.191000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 00:21:12.193208 kubelet[2452]: W1213 00:21:12.192761 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 00:21:12.193208 kubelet[2452]: E1213 00:21:12.193121 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Dec 13 00:21:12.193340 kubelet[2452]: I1213 00:21:12.193311 2452 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 00:21:12.193412 kubelet[2452]: I1213 00:21:12.193369 2452 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 00:21:12.193412 kubelet[2452]: I1213 00:21:12.193392 2452 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:21:12.193000 audit[2486]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:12.193000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff145e5860 a2=0 a3=0 items=0 ppid=2452 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:12.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 00:21:12.272854 kubelet[2452]: E1213 00:21:12.272797 2452 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:21:12.290019 kubelet[2452]: E1213 00:21:12.289963 2452 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 00:21:12.373087 kubelet[2452]: E1213 00:21:12.372964 2452 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:21:12.373373 kubelet[2452]: E1213 00:21:12.373316 2452 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms" Dec 13 00:21:12.473828 kubelet[2452]: E1213 00:21:12.473767 2452 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:21:12.491073 kubelet[2452]: E1213 00:21:12.491015 2452 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 00:21:12.574455 kubelet[2452]: E1213 00:21:12.574398 2452 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:21:12.675742 kubelet[2452]: E1213 00:21:12.675576 2452 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:21:12.683871 kubelet[2452]: I1213 00:21:12.683844 2452 policy_none.go:49] "None policy: Start" Dec 13 00:21:12.683953 
kubelet[2452]: I1213 00:21:12.683885 2452 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 00:21:12.683953 kubelet[2452]: I1213 00:21:12.683908 2452 state_mem.go:35] "Initializing new in-memory state store" Dec 13 00:21:12.695554 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 00:21:12.714413 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 00:21:12.718048 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 00:21:12.730969 kubelet[2452]: I1213 00:21:12.730795 2452 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 00:21:12.731125 kubelet[2452]: I1213 00:21:12.731092 2452 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 00:21:12.731176 kubelet[2452]: I1213 00:21:12.731117 2452 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 00:21:12.731449 kubelet[2452]: I1213 00:21:12.731423 2452 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:21:12.733201 kubelet[2452]: E1213 00:21:12.732986 2452 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 13 00:21:12.733201 kubelet[2452]: E1213 00:21:12.733031 2452 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 00:21:12.774888 kubelet[2452]: E1213 00:21:12.774799 2452 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms" Dec 13 00:21:12.833730 kubelet[2452]: I1213 00:21:12.833634 2452 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:21:12.834157 kubelet[2452]: E1213 00:21:12.834111 2452 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Dec 13 00:21:12.900782 systemd[1]: Created slice kubepods-burstable-pod8681337b22fde46a4723210deacb1500.slice - libcontainer container kubepods-burstable-pod8681337b22fde46a4723210deacb1500.slice. Dec 13 00:21:12.922263 kubelet[2452]: E1213 00:21:12.922229 2452 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:21:12.924255 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 13 00:21:12.941328 kubelet[2452]: E1213 00:21:12.941222 2452 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:21:12.944104 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. 
Dec 13 00:21:12.945826 kubelet[2452]: E1213 00:21:12.945791 2452 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:21:12.977162 kubelet[2452]: I1213 00:21:12.977119 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:12.977162 kubelet[2452]: I1213 00:21:12.977154 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:12.977273 kubelet[2452]: I1213 00:21:12.977223 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8681337b22fde46a4723210deacb1500-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8681337b22fde46a4723210deacb1500\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:12.977273 kubelet[2452]: I1213 00:21:12.977262 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8681337b22fde46a4723210deacb1500-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8681337b22fde46a4723210deacb1500\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:12.977367 kubelet[2452]: I1213 00:21:12.977281 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:12.977367 kubelet[2452]: I1213 00:21:12.977296 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:12.977439 kubelet[2452]: I1213 00:21:12.977319 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 13 00:21:12.977439 kubelet[2452]: I1213 00:21:12.977405 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8681337b22fde46a4723210deacb1500-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8681337b22fde46a4723210deacb1500\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:12.977439 kubelet[2452]: I1213 00:21:12.977424 2452 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:13.025119 kubelet[2452]: W1213 00:21:13.025014 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 00:21:13.025119 kubelet[2452]: E1213 00:21:13.025123 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Dec 13 00:21:13.036035 kubelet[2452]: I1213 00:21:13.036000 2452 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:21:13.036439 kubelet[2452]: E1213 00:21:13.036396 2452 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Dec 13 00:21:13.178836 kubelet[2452]: W1213 00:21:13.178751 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 00:21:13.178981 kubelet[2452]: E1213 00:21:13.178847 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Dec 13 00:21:13.223782 kubelet[2452]: E1213 00:21:13.223636 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:13.224410 containerd[1633]: time="2025-12-13T00:21:13.224377389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8681337b22fde46a4723210deacb1500,Namespace:kube-system,Attempt:0,}" Dec 13 00:21:13.242586 kubelet[2452]: E1213 00:21:13.242559 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:13.242965 containerd[1633]: time="2025-12-13T00:21:13.242933973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 13 00:21:13.246148 kubelet[2452]: E1213 00:21:13.246120 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:13.246407 containerd[1633]: time="2025-12-13T00:21:13.246372873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 13 00:21:13.438069 kubelet[2452]: I1213 00:21:13.438033 2452 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:21:13.438395 kubelet[2452]: E1213 00:21:13.438361 2452 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Dec 13 00:21:13.555887 kubelet[2452]: W1213 00:21:13.555680 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 00:21:13.555887 kubelet[2452]: E1213 00:21:13.555772 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Dec 13 00:21:13.576121 kubelet[2452]: E1213 00:21:13.576060 2452 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="1.6s" Dec 13 00:21:13.688837 kubelet[2452]: W1213 00:21:13.688738 2452 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 00:21:13.688837 kubelet[2452]: E1213 00:21:13.688836 2452 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Dec 13 00:21:13.968829 containerd[1633]: time="2025-12-13T00:21:13.968760632Z" level=info msg="connecting to shim 3645513019fd899878a24ff364f6f4597d54ad027d978c86ff3a20da27c8a42b" address="unix:///run/containerd/s/c6544bf91d7b769838213126cb0580521c3bfc0901c1a55e593cb1177ae8d8ae" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:21:13.970966 containerd[1633]: time="2025-12-13T00:21:13.970939671Z" level=info msg="connecting to shim 6677b9d9dc273c1fd8c8f678adbb5fbbb75f7ddabd7857b942a39ef2ab669056" address="unix:///run/containerd/s/01bf4dace157294311a8c0ba4beaa41af2191d29e34fa1e87dc0126f951d9c01" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:21:13.974502 containerd[1633]: time="2025-12-13T00:21:13.974465086Z" level=info msg="connecting to shim dc86adac56c14fbbee487bf2bf301add9af76d62b0d92a6649342eff02a6506a" address="unix:///run/containerd/s/96fe245c606edcb4b18d23d55358af09d30411bafbed6e6d7d27a40ed93d5cab" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:21:14.080993 systemd[1]: Started cri-containerd-3645513019fd899878a24ff364f6f4597d54ad027d978c86ff3a20da27c8a42b.scope - libcontainer container 3645513019fd899878a24ff364f6f4597d54ad027d978c86ff3a20da27c8a42b. Dec 13 00:21:14.082973 systemd[1]: Started cri-containerd-6677b9d9dc273c1fd8c8f678adbb5fbbb75f7ddabd7857b942a39ef2ab669056.scope - libcontainer container 6677b9d9dc273c1fd8c8f678adbb5fbbb75f7ddabd7857b942a39ef2ab669056. 
Dec 13 00:21:14.084898 systemd[1]: Started cri-containerd-dc86adac56c14fbbee487bf2bf301add9af76d62b0d92a6649342eff02a6506a.scope - libcontainer container dc86adac56c14fbbee487bf2bf301add9af76d62b0d92a6649342eff02a6506a. Dec 13 00:21:14.101000 audit: BPF prog-id=87 op=LOAD Dec 13 00:21:14.102000 audit: BPF prog-id=88 op=LOAD Dec 13 00:21:14.102000 audit[2549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2509 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343535313330313966643839393837386132346666333634663666 Dec 13 00:21:14.102000 audit: BPF prog-id=88 op=UNLOAD Dec 13 00:21:14.102000 audit[2549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343535313330313966643839393837386132346666333634663666 Dec 13 00:21:14.104000 audit: BPF prog-id=89 op=LOAD Dec 13 00:21:14.104000 audit: BPF prog-id=90 op=LOAD Dec 13 00:21:14.104000 audit[2549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2509 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343535313330313966643839393837386132346666333634663666 Dec 13 00:21:14.104000 audit: BPF prog-id=91 op=LOAD Dec 13 00:21:14.104000 audit[2549]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2509 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343535313330313966643839393837386132346666333634663666 Dec 13 00:21:14.104000 audit: BPF prog-id=91 op=UNLOAD Dec 13 00:21:14.104000 audit[2549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.104000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343535313330313966643839393837386132346666333634663666 Dec 13 00:21:14.104000 audit: BPF prog-id=90 op=UNLOAD Dec 13 00:21:14.104000 audit[2549]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343535313330313966643839393837386132346666333634663666 Dec 13 00:21:14.104000 audit: BPF prog-id=92 op=LOAD Dec 13 00:21:14.104000 audit[2549]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2509 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.104000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336343535313330313966643839393837386132346666333634663666 Dec 13 00:21:14.105000 audit: BPF prog-id=93 op=LOAD Dec 13 00:21:14.105000 audit[2536]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2512 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373762396439646332373363316664386338663637386164626235 Dec 13 00:21:14.105000 audit: BPF prog-id=93 op=UNLOAD Dec 13 00:21:14.105000 audit[2536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373762396439646332373363316664386338663637386164626235 Dec 13 00:21:14.105000 audit: BPF prog-id=94 op=LOAD Dec 13 00:21:14.105000 audit[2536]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2512 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.105000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373762396439646332373363316664386338663637386164626235 Dec 13 00:21:14.106000 audit: BPF prog-id=95 op=LOAD Dec 13 00:21:14.105000 audit: BPF prog-id=96 op=LOAD Dec 13 00:21:14.105000 audit[2536]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2512 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.105000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373762396439646332373363316664386338663637386164626235 Dec 13 00:21:14.106000 audit: BPF prog-id=96 op=UNLOAD Dec 13 00:21:14.106000 audit[2536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373762396439646332373363316664386338663637386164626235 Dec 13 00:21:14.106000 audit: BPF prog-id=94 op=UNLOAD Dec 13 00:21:14.106000 audit[2536]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373762396439646332373363316664386338663637386164626235 Dec 13 00:21:14.106000 audit: BPF prog-id=97 op=LOAD Dec 13 00:21:14.106000 audit[2546]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2520 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463383661646163353663313466626265653438376266326266333031 Dec 13 00:21:14.106000 audit: BPF prog-id=98 op=LOAD Dec 13 00:21:14.107000 audit: BPF prog-id=97 op=UNLOAD Dec 13 00:21:14.106000 audit[2536]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2512 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.106000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636373762396439646332373363316664386338663637386164626235 Dec 13 00:21:14.107000 audit[2546]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2520 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463383661646163353663313466626265653438376266326266333031 Dec 13 00:21:14.107000 audit: BPF prog-id=99 op=LOAD Dec 13 00:21:14.107000 audit[2546]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2520 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463383661646163353663313466626265653438376266326266333031 Dec 13 00:21:14.107000 audit: BPF prog-id=100 op=LOAD Dec 13 00:21:14.107000 audit[2546]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2520 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463383661646163353663313466626265653438376266326266333031 Dec 13 00:21:14.107000 audit: BPF prog-id=100 op=UNLOAD Dec 13 00:21:14.107000 audit[2546]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2520 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463383661646163353663313466626265653438376266326266333031 Dec 13 00:21:14.107000 audit: BPF prog-id=99 op=UNLOAD Dec 13 00:21:14.107000 audit[2546]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2520 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.107000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463383661646163353663313466626265653438376266326266333031 Dec 13 00:21:14.107000 audit: BPF prog-id=101 op=LOAD Dec 13 00:21:14.107000 audit[2546]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2520 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463383661646163353663313466626265653438376266326266333031 Dec 13 00:21:14.152006 containerd[1633]: time="2025-12-13T00:21:14.151954638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8681337b22fde46a4723210deacb1500,Namespace:kube-system,Attempt:0,} returns sandbox id \"3645513019fd899878a24ff364f6f4597d54ad027d978c86ff3a20da27c8a42b\"" Dec 13 00:21:14.154256 kubelet[2452]: E1213 00:21:14.154233 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:14.157084 containerd[1633]: time="2025-12-13T00:21:14.157053311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc86adac56c14fbbee487bf2bf301add9af76d62b0d92a6649342eff02a6506a\"" Dec 13 00:21:14.158892 containerd[1633]: time="2025-12-13T00:21:14.157682233Z" level=info msg="CreateContainer within sandbox \"3645513019fd899878a24ff364f6f4597d54ad027d978c86ff3a20da27c8a42b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 00:21:14.158892 containerd[1633]: time="2025-12-13T00:21:14.158277500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6677b9d9dc273c1fd8c8f678adbb5fbbb75f7ddabd7857b942a39ef2ab669056\"" Dec 13 00:21:14.159077 kubelet[2452]: E1213 00:21:14.158342 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:14.160543 kubelet[2452]: E1213 00:21:14.160505 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:14.163006 containerd[1633]: time="2025-12-13T00:21:14.162897348Z" level=info msg="CreateContainer within sandbox \"6677b9d9dc273c1fd8c8f678adbb5fbbb75f7ddabd7857b942a39ef2ab669056\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 00:21:14.163983 containerd[1633]: time="2025-12-13T00:21:14.163940541Z" level=info msg="CreateContainer within sandbox \"dc86adac56c14fbbee487bf2bf301add9af76d62b0d92a6649342eff02a6506a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 00:21:14.172648 containerd[1633]: time="2025-12-13T00:21:14.172605229Z" level=info msg="Container 
a640d923226b1533c12104f4807c092bf19fd0252545c151440c194d8dd4cfd5: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:21:14.177846 containerd[1633]: time="2025-12-13T00:21:14.177764327Z" level=info msg="Container 7acc6889d29c23ce5154e7db8f2ee553c9850599f5b227d50bd2f5efd2396b91: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:21:14.185320 containerd[1633]: time="2025-12-13T00:21:14.185277314Z" level=info msg="Container 32680e2e31a41ca51f4e60185223390e70f50872c772efbfdc79177932bdf705: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:21:14.188425 containerd[1633]: time="2025-12-13T00:21:14.188388779Z" level=info msg="CreateContainer within sandbox \"3645513019fd899878a24ff364f6f4597d54ad027d978c86ff3a20da27c8a42b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a640d923226b1533c12104f4807c092bf19fd0252545c151440c194d8dd4cfd5\"" Dec 13 00:21:14.189143 containerd[1633]: time="2025-12-13T00:21:14.189095058Z" level=info msg="StartContainer for \"a640d923226b1533c12104f4807c092bf19fd0252545c151440c194d8dd4cfd5\"" Dec 13 00:21:14.190529 containerd[1633]: time="2025-12-13T00:21:14.190493611Z" level=info msg="connecting to shim a640d923226b1533c12104f4807c092bf19fd0252545c151440c194d8dd4cfd5" address="unix:///run/containerd/s/c6544bf91d7b769838213126cb0580521c3bfc0901c1a55e593cb1177ae8d8ae" protocol=ttrpc version=3 Dec 13 00:21:14.192525 containerd[1633]: time="2025-12-13T00:21:14.192488683Z" level=info msg="CreateContainer within sandbox \"6677b9d9dc273c1fd8c8f678adbb5fbbb75f7ddabd7857b942a39ef2ab669056\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7acc6889d29c23ce5154e7db8f2ee553c9850599f5b227d50bd2f5efd2396b91\"" Dec 13 00:21:14.192817 containerd[1633]: time="2025-12-13T00:21:14.192780520Z" level=info msg="StartContainer for \"7acc6889d29c23ce5154e7db8f2ee553c9850599f5b227d50bd2f5efd2396b91\"" Dec 13 00:21:14.193733 containerd[1633]: time="2025-12-13T00:21:14.193683146Z" level=info msg="connecting to shim 7acc6889d29c23ce5154e7db8f2ee553c9850599f5b227d50bd2f5efd2396b91" address="unix:///run/containerd/s/01bf4dace157294311a8c0ba4beaa41af2191d29e34fa1e87dc0126f951d9c01" protocol=ttrpc version=3 Dec 13 00:21:14.202503 containerd[1633]: time="2025-12-13T00:21:14.202458364Z" level=info msg="CreateContainer within sandbox \"dc86adac56c14fbbee487bf2bf301add9af76d62b0d92a6649342eff02a6506a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"32680e2e31a41ca51f4e60185223390e70f50872c772efbfdc79177932bdf705\"" Dec 13 00:21:14.203471 containerd[1633]: time="2025-12-13T00:21:14.203453095Z" level=info msg="StartContainer for \"32680e2e31a41ca51f4e60185223390e70f50872c772efbfdc79177932bdf705\"" Dec 13 00:21:14.205145 containerd[1633]: time="2025-12-13T00:21:14.205122285Z" level=info msg="connecting to shim 32680e2e31a41ca51f4e60185223390e70f50872c772efbfdc79177932bdf705" address="unix:///run/containerd/s/96fe245c606edcb4b18d23d55358af09d30411bafbed6e6d7d27a40ed93d5cab" protocol=ttrpc version=3 Dec 13 00:21:14.214991 systemd[1]: Started cri-containerd-a640d923226b1533c12104f4807c092bf19fd0252545c151440c194d8dd4cfd5.scope - libcontainer container a640d923226b1533c12104f4807c092bf19fd0252545c151440c194d8dd4cfd5. Dec 13 00:21:14.218605 systemd[1]: Started cri-containerd-7acc6889d29c23ce5154e7db8f2ee553c9850599f5b227d50bd2f5efd2396b91.scope - libcontainer container 7acc6889d29c23ce5154e7db8f2ee553c9850599f5b227d50bd2f5efd2396b91. 
Dec 13 00:21:14.222530 systemd[1]: Started cri-containerd-32680e2e31a41ca51f4e60185223390e70f50872c772efbfdc79177932bdf705.scope - libcontainer container 32680e2e31a41ca51f4e60185223390e70f50872c772efbfdc79177932bdf705. Dec 13 00:21:14.232000 audit: BPF prog-id=102 op=LOAD Dec 13 00:21:14.232000 audit: BPF prog-id=103 op=LOAD Dec 13 00:21:14.232000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2509 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343064393233323236623135333363313231303466343830376330 Dec 13 00:21:14.232000 audit: BPF prog-id=103 op=UNLOAD Dec 13 00:21:14.232000 audit[2623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.232000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343064393233323236623135333363313231303466343830376330 Dec 13 00:21:14.234000 audit: BPF prog-id=104 op=LOAD Dec 13 00:21:14.234000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2509 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343064393233323236623135333363313231303466343830376330 Dec 13 00:21:14.234000 audit: BPF prog-id=105 op=LOAD Dec 13 00:21:14.234000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2509 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343064393233323236623135333363313231303466343830376330 Dec 13 00:21:14.234000 audit: BPF prog-id=105 op=UNLOAD Dec 13 00:21:14.234000 audit[2623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.234000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343064393233323236623135333363313231303466343830376330 Dec 13 00:21:14.234000 audit: BPF prog-id=104 op=UNLOAD Dec 13 00:21:14.234000 audit[2623]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2509 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343064393233323236623135333363313231303466343830376330 Dec 13 00:21:14.234000 audit: BPF prog-id=106 op=LOAD Dec 13 00:21:14.234000 audit[2623]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2509 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136343064393233323236623135333363313231303466343830376330 Dec 13 00:21:14.241733 kubelet[2452]: I1213 00:21:14.241694 2452 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:21:14.242245 kubelet[2452]: E1213 00:21:14.242215 2452 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Dec 13 00:21:14.242000 audit: BPF prog-id=107 op=LOAD Dec 13 00:21:14.242000 audit: BPF prog-id=108 op=LOAD Dec 13 00:21:14.242000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2512 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636336383839643239633233636535313534653764623866326565 Dec 13 00:21:14.242000 audit: BPF prog-id=108 op=UNLOAD Dec 13 00:21:14.242000 audit[2629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636336383839643239633233636535313534653764623866326565 Dec 13 00:21:14.242000 audit: BPF prog-id=109 op=LOAD Dec 13 00:21:14.242000 audit[2629]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2512 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636336383839643239633233636535313534653764623866326565 Dec 13 00:21:14.242000 audit: BPF prog-id=110 op=LOAD Dec 13 00:21:14.242000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2512 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636336383839643239633233636535313534653764623866326565 Dec 13 00:21:14.242000 audit: BPF prog-id=110 op=UNLOAD Dec 13 00:21:14.242000 audit[2629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636336383839643239633233636535313534653764623866326565 Dec 13 00:21:14.242000 audit: BPF prog-id=109 op=UNLOAD Dec 13 00:21:14.242000 audit[2629]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636336383839643239633233636535313534653764623866326565 Dec 13 00:21:14.242000 audit: BPF prog-id=111 op=LOAD Dec 13 00:21:14.242000 audit[2629]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2512 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.242000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761636336383839643239633233636535313534653764623866326565 Dec 13 00:21:14.245000 audit: BPF prog-id=112 op=LOAD Dec 13 00:21:14.247000 audit: BPF prog-id=113 op=LOAD Dec 13 00:21:14.247000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2520 pid=2648 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332363830653265333161343163613531663465363031383532323333 Dec 13 00:21:14.247000 audit: BPF prog-id=113 op=UNLOAD Dec 13 00:21:14.247000 audit[2648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2520 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332363830653265333161343163613531663465363031383532323333 Dec 13 00:21:14.248000 audit: BPF prog-id=114 op=LOAD Dec 13 00:21:14.248000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2520 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332363830653265333161343163613531663465363031383532323333 Dec 13 00:21:14.248000 audit: BPF prog-id=115 op=LOAD Dec 13 00:21:14.248000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2520 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332363830653265333161343163613531663465363031383532323333 Dec 13 00:21:14.248000 audit: BPF prog-id=115 op=UNLOAD Dec 13 00:21:14.248000 audit[2648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2520 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332363830653265333161343163613531663465363031383532323333 Dec 13 00:21:14.248000 audit: BPF prog-id=114 op=UNLOAD Dec 13 00:21:14.248000 audit[2648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2520 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 00:21:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332363830653265333161343163613531663465363031383532323333 Dec 13 00:21:14.248000 audit: BPF prog-id=116 op=LOAD Dec 13 00:21:14.248000 audit[2648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2520 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332363830653265333161343163613531663465363031383532323333 Dec 13 00:21:14.286593 containerd[1633]: time="2025-12-13T00:21:14.286552718Z" level=info msg="StartContainer for \"a640d923226b1533c12104f4807c092bf19fd0252545c151440c194d8dd4cfd5\" returns successfully" Dec 13 00:21:14.294445 containerd[1633]: time="2025-12-13T00:21:14.294376988Z" level=info msg="StartContainer for \"7acc6889d29c23ce5154e7db8f2ee553c9850599f5b227d50bd2f5efd2396b91\" returns successfully" Dec 13 00:21:14.299284 containerd[1633]: time="2025-12-13T00:21:14.299257174Z" level=info msg="StartContainer for \"32680e2e31a41ca51f4e60185223390e70f50872c772efbfdc79177932bdf705\" returns successfully" Dec 13 00:21:14.327803 kubelet[2452]: E1213 00:21:14.327736 2452 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Dec 13 00:21:15.217585 kubelet[2452]: E1213 00:21:15.217300 2452 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:21:15.218772 kubelet[2452]: E1213 00:21:15.218493 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:15.218772 kubelet[2452]: E1213 00:21:15.218565 2452 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:21:15.218772 kubelet[2452]: E1213 00:21:15.218674 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:15.221519 kubelet[2452]: E1213 00:21:15.221486 2452 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 13 00:21:15.221644 kubelet[2452]: E1213 00:21:15.221619 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:15.518253 kubelet[2452]: E1213 00:21:15.518043 2452 nodelease.go:49] "Failed to get node when trying to set owner ref to the node 
lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 00:21:15.844592 kubelet[2452]: I1213 00:21:15.844294 2452 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:21:15.850128 kubelet[2452]: I1213 00:21:15.850084 2452 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 00:21:15.850128 kubelet[2452]: E1213 00:21:15.850115 2452 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 13 00:21:15.857797 kubelet[2452]: E1213 00:21:15.857758 2452 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:21:15.958186 kubelet[2452]: E1213 00:21:15.958131 2452 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:21:16.058794 kubelet[2452]: E1213 00:21:16.058733 2452 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:21:16.159668 kubelet[2452]: E1213 00:21:16.159520 2452 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 00:21:16.222246 kubelet[2452]: I1213 00:21:16.222195 2452 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:16.222803 kubelet[2452]: I1213 00:21:16.222563 2452 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:16.222803 kubelet[2452]: I1213 00:21:16.222800 2452 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:21:16.227442 kubelet[2452]: E1213 00:21:16.227410 2452 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:16.227527 kubelet[2452]: E1213 00:21:16.227488 2452 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:16.227830 kubelet[2452]: E1213 00:21:16.227582 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:16.227830 kubelet[2452]: E1213 00:21:16.227669 2452 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 00:21:16.227830 kubelet[2452]: E1213 00:21:16.227684 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:16.228026 kubelet[2452]: E1213 00:21:16.228007 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:16.272887 kubelet[2452]: I1213 00:21:16.272839 2452 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:16.274320 kubelet[2452]: E1213 00:21:16.274283 2452 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:16.274320 kubelet[2452]: I1213 00:21:16.274300 2452 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:16.275378 kubelet[2452]: E1213 00:21:16.275356 2452 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:16.275378 kubelet[2452]: I1213 00:21:16.275372 2452 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:21:16.276384 kubelet[2452]: E1213 00:21:16.276358 2452 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 00:21:17.160963 kubelet[2452]: I1213 00:21:17.160893 2452 apiserver.go:52] "Watching apiserver" Dec 13 00:21:17.172377 kubelet[2452]: I1213 00:21:17.172316 2452 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 00:21:17.223995 kubelet[2452]: I1213 00:21:17.223964 2452 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:21:17.224433 kubelet[2452]: I1213 00:21:17.224266 2452 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:17.230800 kubelet[2452]: E1213 00:21:17.230759 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:17.231889 kubelet[2452]: E1213 00:21:17.231845 2452 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:17.788745 systemd[1]: Reload requested from client PID 2727 ('systemctl') (unit session-10.scope)... Dec 13 00:21:17.788764 systemd[1]: Reloading... Dec 13 00:21:17.873840 zram_generator::config[2772]: No configuration found. Dec 13 00:21:18.150550 systemd[1]: Reloading finished in 361 ms. Dec 13 00:21:18.184323 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 00:21:18.204340 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 00:21:18.204782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:21:18.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:18.204900 systemd[1]: kubelet.service: Consumed 817ms CPU time, 130.3M memory peak. Dec 13 00:21:18.206108 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 13 00:21:18.206171 kernel: audit: type=1131 audit(1765585278.204:404): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:18.207364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 00:21:18.208000 audit: BPF prog-id=117 op=LOAD Dec 13 00:21:18.211769 kernel: audit: type=1334 audit(1765585278.208:405): prog-id=117 op=LOAD Dec 13 00:21:18.211848 kernel: audit: type=1334 audit(1765585278.208:406): prog-id=76 op=UNLOAD Dec 13 00:21:18.208000 audit: BPF prog-id=76 op=UNLOAD Dec 13 00:21:18.209000 audit: BPF prog-id=118 op=LOAD Dec 13 00:21:18.209000 audit: BPF prog-id=73 op=UNLOAD Dec 13 00:21:18.214832 kernel: audit: type=1334 audit(1765585278.209:407): prog-id=118 op=LOAD Dec 13 00:21:18.214882 kernel: audit: type=1334 audit(1765585278.209:408): prog-id=73 op=UNLOAD Dec 13 00:21:18.209000 audit: BPF prog-id=119 op=LOAD Dec 13 00:21:18.217847 kernel: audit: type=1334 audit(1765585278.209:409): prog-id=119 op=LOAD Dec 13 00:21:18.217898 kernel: audit: type=1334 audit(1765585278.209:410): prog-id=120 op=LOAD Dec 13 00:21:18.209000 audit: BPF prog-id=120 op=LOAD Dec 13 00:21:18.209000 audit: BPF prog-id=74 op=UNLOAD Dec 13 00:21:18.220655 kernel: audit: type=1334 audit(1765585278.209:411): prog-id=74 op=UNLOAD Dec 13 00:21:18.220689 kernel: audit: type=1334 audit(1765585278.209:412): prog-id=75 op=UNLOAD Dec 13 00:21:18.209000 audit: BPF prog-id=75 op=UNLOAD Dec 13 00:21:18.210000 audit: BPF prog-id=121 op=LOAD Dec 13 00:21:18.223639 kernel: audit: type=1334 audit(1765585278.210:413): prog-id=121 op=LOAD Dec 13 00:21:18.210000 audit: BPF prog-id=80 op=UNLOAD Dec 13 00:21:18.210000 audit: BPF prog-id=122 op=LOAD Dec 13 00:21:18.210000 audit: BPF prog-id=123 op=LOAD Dec 13 00:21:18.210000 audit: BPF prog-id=81 op=UNLOAD Dec 13 00:21:18.210000 audit: BPF prog-id=82 op=UNLOAD Dec 13 00:21:18.210000 audit: BPF prog-id=124 op=LOAD Dec 13 00:21:18.211000 audit: BPF prog-id=86 op=UNLOAD Dec 13 00:21:18.213000 audit: BPF prog-id=125 op=LOAD Dec 13 00:21:18.213000 audit: BPF prog-id=70 op=UNLOAD Dec 13 00:21:18.213000 audit: BPF prog-id=126 op=LOAD Dec 13 00:21:18.213000 audit: BPF prog-id=127 op=LOAD Dec 13 00:21:18.213000 audit: BPF prog-id=71 op=UNLOAD Dec 13 00:21:18.213000 audit: BPF prog-id=72 op=UNLOAD Dec 13 00:21:18.214000 audit: BPF prog-id=128 op=LOAD Dec 13 00:21:18.232000 audit: BPF prog-id=129 op=LOAD Dec 13 00:21:18.232000 audit: BPF prog-id=68 op=UNLOAD Dec 13 00:21:18.232000 audit: BPF prog-id=69 op=UNLOAD Dec 13 00:21:18.234000 audit: BPF prog-id=130 op=LOAD Dec 13 00:21:18.234000 audit: BPF prog-id=83 op=UNLOAD Dec 13 00:21:18.235000 audit: BPF prog-id=131 op=LOAD Dec 13 00:21:18.235000 audit: BPF prog-id=132 op=LOAD Dec 13 00:21:18.235000 audit: BPF prog-id=84 op=UNLOAD Dec 13 00:21:18.235000 audit: BPF prog-id=85 op=UNLOAD Dec 13 00:21:18.237000 audit: BPF prog-id=133 op=LOAD Dec 13 00:21:18.237000 audit: BPF prog-id=67 op=UNLOAD Dec 13 00:21:18.238000 audit: BPF prog-id=134 op=LOAD Dec 13 00:21:18.238000 audit: BPF prog-id=77 op=UNLOAD Dec 13 00:21:18.238000 audit: BPF prog-id=135 op=LOAD Dec 13 00:21:18.238000 audit: BPF prog-id=136 op=LOAD Dec 13 00:21:18.238000 audit: BPF prog-id=78 op=UNLOAD Dec 13 00:21:18.238000 audit: BPF prog-id=79 op=UNLOAD Dec 13 00:21:18.442677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 00:21:18.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:21:18.455276 (kubelet)[2818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 00:21:18.508735 kubelet[2818]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:21:18.508735 kubelet[2818]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 13 00:21:18.508735 kubelet[2818]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 00:21:18.509190 kubelet[2818]: I1213 00:21:18.508788 2818 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 00:21:18.515556 kubelet[2818]: I1213 00:21:18.515510 2818 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 13 00:21:18.515556 kubelet[2818]: I1213 00:21:18.515538 2818 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 00:21:18.515832 kubelet[2818]: I1213 00:21:18.515793 2818 server.go:954] "Client rotation is on, will bootstrap in background" Dec 13 00:21:18.517002 kubelet[2818]: I1213 00:21:18.516986 2818 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 00:21:18.520926 kubelet[2818]: I1213 00:21:18.520897 2818 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 00:21:18.524989 kubelet[2818]: I1213 00:21:18.524953 2818 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 13 00:21:18.531103 kubelet[2818]: I1213 00:21:18.531072 2818 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 00:21:18.531403 kubelet[2818]: I1213 00:21:18.531357 2818 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 00:21:18.531599 kubelet[2818]: I1213 00:21:18.531396 2818 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 00:21:18.531707 kubelet[2818]: I1213 00:21:18.531611 2818 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 00:21:18.531707 kubelet[2818]: I1213 00:21:18.531623 2818 container_manager_linux.go:304] "Creating device plugin manager" Dec 13 00:21:18.531903 kubelet[2818]: I1213 00:21:18.531741 2818 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:21:18.531969 kubelet[2818]: I1213 00:21:18.531946 2818 kubelet.go:446] "Attempting to sync node with API server" Dec 13 00:21:18.532008 kubelet[2818]: I1213 00:21:18.531976 2818 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 00:21:18.532008 kubelet[2818]: I1213 00:21:18.532003 2818 kubelet.go:352] "Adding apiserver pod source" Dec 13 00:21:18.532065 kubelet[2818]: I1213 00:21:18.532015 2818 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 00:21:18.532830 kubelet[2818]: I1213 00:21:18.532719 2818 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 13 00:21:18.533244 kubelet[2818]: I1213 00:21:18.533187 2818 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 00:21:18.533703 kubelet[2818]: I1213 00:21:18.533682 2818 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 13 00:21:18.533703 kubelet[2818]: I1213 00:21:18.533716 2818 server.go:1287] "Started kubelet" Dec 13 00:21:18.536744 kubelet[2818]: I1213 00:21:18.536706 2818 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 00:21:18.540633 kubelet[2818]: I1213 00:21:18.540547 2818 server.go:169] "Starting to 
listen" address="0.0.0.0" port=10250 Dec 13 00:21:18.541185 kubelet[2818]: I1213 00:21:18.541069 2818 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 13 00:21:18.541185 kubelet[2818]: I1213 00:21:18.541170 2818 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 13 00:21:18.541539 kubelet[2818]: I1213 00:21:18.541320 2818 reconciler.go:26] "Reconciler: start to sync state" Dec 13 00:21:18.541539 kubelet[2818]: I1213 00:21:18.541374 2818 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 00:21:18.542911 kubelet[2818]: I1213 00:21:18.542888 2818 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 00:21:18.544371 kubelet[2818]: I1213 00:21:18.543870 2818 server.go:479] "Adding debug handlers to kubelet server" Dec 13 00:21:18.545181 kubelet[2818]: I1213 00:21:18.544902 2818 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 00:21:18.545181 kubelet[2818]: I1213 00:21:18.545173 2818 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 00:21:18.549661 kubelet[2818]: E1213 00:21:18.549613 2818 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 00:21:18.551269 kubelet[2818]: I1213 00:21:18.551223 2818 factory.go:221] Registration of the containerd container factory successfully Dec 13 00:21:18.551269 kubelet[2818]: I1213 00:21:18.551249 2818 factory.go:221] Registration of the systemd container factory successfully Dec 13 00:21:18.569075 kubelet[2818]: I1213 00:21:18.569009 2818 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 00:21:18.571836 kubelet[2818]: I1213 00:21:18.571290 2818 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 00:21:18.571836 kubelet[2818]: I1213 00:21:18.571323 2818 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 13 00:21:18.571836 kubelet[2818]: I1213 00:21:18.571344 2818 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 13 00:21:18.571836 kubelet[2818]: I1213 00:21:18.571351 2818 kubelet.go:2382] "Starting kubelet main sync loop" Dec 13 00:21:18.571836 kubelet[2818]: E1213 00:21:18.571404 2818 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 00:21:18.593502 kubelet[2818]: I1213 00:21:18.593468 2818 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 13 00:21:18.593502 kubelet[2818]: I1213 00:21:18.593485 2818 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 13 00:21:18.593502 kubelet[2818]: I1213 00:21:18.593506 2818 state_mem.go:36] "Initialized new in-memory state store" Dec 13 00:21:18.593715 kubelet[2818]: I1213 00:21:18.593652 2818 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 00:21:18.593715 kubelet[2818]: I1213 00:21:18.593662 2818 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 00:21:18.593715 kubelet[2818]: I1213 00:21:18.593681 2818 policy_none.go:49] "None policy: Start" Dec 13 00:21:18.593715 kubelet[2818]: I1213 00:21:18.593690 2818 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 13 00:21:18.593715 kubelet[2818]: I1213 00:21:18.593700 2818 state_mem.go:35] "Initializing new in-memory state store" Dec 13 00:21:18.593869 kubelet[2818]: I1213 00:21:18.593838 2818 state_mem.go:75] "Updated machine memory state" Dec 13 00:21:18.597824 kubelet[2818]: I1213 00:21:18.597775 2818 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 00:21:18.598051 kubelet[2818]: I1213 00:21:18.598015 2818 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 00:21:18.598105 kubelet[2818]: I1213 00:21:18.598035 2818 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 00:21:18.598431 kubelet[2818]: I1213 00:21:18.598239 2818 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 00:21:18.599657 kubelet[2818]: E1213 00:21:18.599633 2818 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 13 00:21:18.672931 kubelet[2818]: I1213 00:21:18.672875 2818 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:18.673245 kubelet[2818]: I1213 00:21:18.672884 2818 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 13 00:21:18.673363 kubelet[2818]: I1213 00:21:18.672926 2818 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:18.679616 kubelet[2818]: E1213 00:21:18.679563 2818 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 00:21:18.679695 kubelet[2818]: E1213 00:21:18.679641 2818 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:18.708468 kubelet[2818]: I1213 00:21:18.708342 2818 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 13 00:21:18.715869 kubelet[2818]: I1213 00:21:18.715834 2818 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 13 00:21:18.716002 kubelet[2818]: I1213 00:21:18.715924 2818 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 13 00:21:18.842318 kubelet[2818]: I1213 00:21:18.842265 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8681337b22fde46a4723210deacb1500-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8681337b22fde46a4723210deacb1500\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:18.842318 kubelet[2818]: I1213 00:21:18.842307 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8681337b22fde46a4723210deacb1500-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8681337b22fde46a4723210deacb1500\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:18.842318 kubelet[2818]: I1213 00:21:18.842333 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8681337b22fde46a4723210deacb1500-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8681337b22fde46a4723210deacb1500\") " pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:18.842318 kubelet[2818]: I1213 00:21:18.842348 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:18.842593 kubelet[2818]: I1213 00:21:18.842369 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:18.842593 kubelet[2818]: I1213 00:21:18.842383 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:18.842593 kubelet[2818]: I1213 00:21:18.842395 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:18.842593 kubelet[2818]: I1213 00:21:18.842408 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 00:21:18.842593 kubelet[2818]: I1213 00:21:18.842425 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 13 00:21:18.978752 kubelet[2818]: E1213 00:21:18.978614 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:18.980900 kubelet[2818]: E1213 00:21:18.980772 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:18.980900 kubelet[2818]: E1213 00:21:18.980772 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:19.582141 kubelet[2818]: I1213 00:21:19.582108 2818 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:19.657246 kubelet[2818]: I1213 00:21:19.656713 2818 apiserver.go:52] "Watching apiserver" Dec 13 00:21:19.658080 kubelet[2818]: E1213 00:21:19.658055 2818 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:19.658273 kubelet[2818]: E1213 00:21:19.658250 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:19.663330 kubelet[2818]: E1213 00:21:19.663289 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:19.663500 kubelet[2818]: E1213 00:21:19.663363 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:19.706743 kubelet[2818]: I1213 00:21:19.706637 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.706606093 
podStartE2EDuration="2.706606093s" podCreationTimestamp="2025-12-13 00:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:21:19.706436401 +0000 UTC m=+1.237678705" watchObservedRunningTime="2025-12-13 00:21:19.706606093 +0000 UTC m=+1.237848397" Dec 13 00:21:19.707382 kubelet[2818]: I1213 00:21:19.707338 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7068007729999999 podStartE2EDuration="1.706800773s" podCreationTimestamp="2025-12-13 00:21:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:21:19.695095961 +0000 UTC m=+1.226338275" watchObservedRunningTime="2025-12-13 00:21:19.706800773 +0000 UTC m=+1.238043067" Dec 13 00:21:19.741958 kubelet[2818]: I1213 00:21:19.741911 2818 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 13 00:21:20.583666 kubelet[2818]: E1213 00:21:20.583624 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:20.584077 kubelet[2818]: I1213 00:21:20.583632 2818 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:20.589650 kubelet[2818]: E1213 00:21:20.589607 2818 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 00:21:20.589831 kubelet[2818]: E1213 00:21:20.589776 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:21.585743 kubelet[2818]: E1213 00:21:21.585685 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:21.586212 kubelet[2818]: E1213 00:21:21.585865 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:24.256875 kubelet[2818]: I1213 00:21:24.256839 2818 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 00:21:24.257335 kubelet[2818]: I1213 00:21:24.257288 2818 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 00:21:24.257371 containerd[1633]: time="2025-12-13T00:21:24.257140167Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 00:21:24.978544 kubelet[2818]: I1213 00:21:24.978464 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.978442932 podStartE2EDuration="7.978442932s" podCreationTimestamp="2025-12-13 00:21:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:21:19.720640312 +0000 UTC m=+1.251882616" watchObservedRunningTime="2025-12-13 00:21:24.978442932 +0000 UTC m=+6.509685236" Dec 13 00:21:24.986732 systemd[1]: Created slice kubepods-besteffort-podd9c22346_c326_416c_aa5d_93946256756e.slice - libcontainer container kubepods-besteffort-podd9c22346_c326_416c_aa5d_93946256756e.slice. Dec 13 00:21:25.080147 kubelet[2818]: I1213 00:21:25.080084 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9c22346-c326-416c-aa5d-93946256756e-xtables-lock\") pod \"kube-proxy-lf57l\" (UID: \"d9c22346-c326-416c-aa5d-93946256756e\") " pod="kube-system/kube-proxy-lf57l" Dec 13 00:21:25.080147 kubelet[2818]: I1213 00:21:25.080121 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9c22346-c326-416c-aa5d-93946256756e-lib-modules\") pod \"kube-proxy-lf57l\" (UID: \"d9c22346-c326-416c-aa5d-93946256756e\") " pod="kube-system/kube-proxy-lf57l" Dec 13 00:21:25.080147 kubelet[2818]: I1213 00:21:25.080139 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xmfn\" (UniqueName: \"kubernetes.io/projected/d9c22346-c326-416c-aa5d-93946256756e-kube-api-access-9xmfn\") pod \"kube-proxy-lf57l\" (UID: \"d9c22346-c326-416c-aa5d-93946256756e\") " pod="kube-system/kube-proxy-lf57l" Dec 13 00:21:25.080147 kubelet[2818]: I1213 00:21:25.080161 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d9c22346-c326-416c-aa5d-93946256756e-kube-proxy\") pod \"kube-proxy-lf57l\" (UID: \"d9c22346-c326-416c-aa5d-93946256756e\") " pod="kube-system/kube-proxy-lf57l" Dec 13 00:21:25.295403 systemd[1]: Created slice kubepods-besteffort-podaf15c1e4_37df_4d53_8837_f9b860c3ec46.slice - libcontainer container kubepods-besteffort-podaf15c1e4_37df_4d53_8837_f9b860c3ec46.slice. 
Dec 13 00:21:25.301497 kubelet[2818]: E1213 00:21:25.301462 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:25.303049 containerd[1633]: time="2025-12-13T00:21:25.302993684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lf57l,Uid:d9c22346-c326-416c-aa5d-93946256756e,Namespace:kube-system,Attempt:0,}" Dec 13 00:21:25.338022 containerd[1633]: time="2025-12-13T00:21:25.337964768Z" level=info msg="connecting to shim aac1d3f02164d8a1e9377c8575ca42317e192d4ed6ca7fd36e316bc2e45e25b6" address="unix:///run/containerd/s/9bf404dbfcc29849d2a16f6cc5498e39628acd2127a15c2483019e78ccb7299b" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:21:25.381487 kubelet[2818]: I1213 00:21:25.381440 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af15c1e4-37df-4d53-8837-f9b860c3ec46-var-lib-calico\") pod \"tigera-operator-7dcd859c48-s28k9\" (UID: \"af15c1e4-37df-4d53-8837-f9b860c3ec46\") " pod="tigera-operator/tigera-operator-7dcd859c48-s28k9" Dec 13 00:21:25.381658 kubelet[2818]: I1213 00:21:25.381535 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr2vw\" (UniqueName: \"kubernetes.io/projected/af15c1e4-37df-4d53-8837-f9b860c3ec46-kube-api-access-nr2vw\") pod \"tigera-operator-7dcd859c48-s28k9\" (UID: \"af15c1e4-37df-4d53-8837-f9b860c3ec46\") " pod="tigera-operator/tigera-operator-7dcd859c48-s28k9" Dec 13 00:21:25.387012 systemd[1]: Started cri-containerd-aac1d3f02164d8a1e9377c8575ca42317e192d4ed6ca7fd36e316bc2e45e25b6.scope - libcontainer container aac1d3f02164d8a1e9377c8575ca42317e192d4ed6ca7fd36e316bc2e45e25b6. 
Dec 13 00:21:25.398000 audit: BPF prog-id=137 op=LOAD Dec 13 00:21:25.400854 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 13 00:21:25.400916 kernel: audit: type=1334 audit(1765585285.398:446): prog-id=137 op=LOAD Dec 13 00:21:25.398000 audit: BPF prog-id=138 op=LOAD Dec 13 00:21:25.403198 kernel: audit: type=1334 audit(1765585285.398:447): prog-id=138 op=LOAD Dec 13 00:21:25.403265 kernel: audit: type=1300 audit(1765585285.398:447): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.398000 audit[2888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633164336630323136346438613165393337376338353735636134 Dec 13 00:21:25.414446 kernel: audit: type=1327 audit(1765585285.398:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633164336630323136346438613165393337376338353735636134 Dec 13 00:21:25.398000 audit: BPF prog-id=138 op=UNLOAD Dec 13 00:21:25.415918 kernel: audit: type=1334 audit(1765585285.398:448): prog-id=138 op=UNLOAD Dec 13 00:21:25.415945 kernel: audit: type=1300 audit(1765585285.398:448): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.398000 audit[2888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633164336630323136346438613165393337376338353735636134 Dec 13 00:21:25.426018 kernel: audit: type=1327 audit(1765585285.398:448): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633164336630323136346438613165393337376338353735636134 Dec 13 00:21:25.426099 kernel: audit: type=1334 audit(1765585285.398:449): prog-id=139 op=LOAD Dec 13 00:21:25.398000 audit: BPF prog-id=139 op=LOAD Dec 13 00:21:25.398000 audit[2888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.429545 containerd[1633]: time="2025-12-13T00:21:25.429505077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lf57l,Uid:d9c22346-c326-416c-aa5d-93946256756e,Namespace:kube-system,Attempt:0,} returns sandbox id \"aac1d3f02164d8a1e9377c8575ca42317e192d4ed6ca7fd36e316bc2e45e25b6\"" Dec 13 00:21:25.430390 kubelet[2818]: E1213 00:21:25.430367 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:25.433019 kernel: audit: type=1300 audit(1765585285.398:449): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.433134 kernel: audit: type=1327 audit(1765585285.398:449): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633164336630323136346438613165393337376338353735636134 Dec 13 00:21:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633164336630323136346438613165393337376338353735636134 Dec 13 00:21:25.433550 containerd[1633]: time="2025-12-13T00:21:25.433514316Z" level=info msg="CreateContainer within sandbox \"aac1d3f02164d8a1e9377c8575ca42317e192d4ed6ca7fd36e316bc2e45e25b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 00:21:25.398000 audit: BPF prog-id=140 op=LOAD Dec 13 00:21:25.398000 audit[2888]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633164336630323136346438613165393337376338353735636134 Dec 13 00:21:25.398000 audit: BPF prog-id=140 op=UNLOAD Dec 13 00:21:25.398000 audit[2888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633164336630323136346438613165393337376338353735636134 Dec 13 00:21:25.399000 audit: BPF prog-id=139 op=UNLOAD Dec 13 00:21:25.399000 audit[2888]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633164336630323136346438613165393337376338353735636134 Dec 13 00:21:25.399000 audit: BPF prog-id=141 op=LOAD Dec 13 00:21:25.399000 audit[2888]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2877 pid=2888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.399000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161633164336630323136346438613165393337376338353735636134 Dec 13 00:21:25.452683 containerd[1633]: time="2025-12-13T00:21:25.452632343Z" level=info msg="Container 08a7a0c6b0fee97ff1834dbe199e50d1d78ffdedf5aa05b6207818e10578c6c3: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:21:25.463284 containerd[1633]: time="2025-12-13T00:21:25.463245893Z" level=info msg="CreateContainer within sandbox \"aac1d3f02164d8a1e9377c8575ca42317e192d4ed6ca7fd36e316bc2e45e25b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"08a7a0c6b0fee97ff1834dbe199e50d1d78ffdedf5aa05b6207818e10578c6c3\"" Dec 13 00:21:25.463832 containerd[1633]: time="2025-12-13T00:21:25.463775044Z" level=info msg="StartContainer for \"08a7a0c6b0fee97ff1834dbe199e50d1d78ffdedf5aa05b6207818e10578c6c3\"" Dec 13 00:21:25.465193 containerd[1633]: time="2025-12-13T00:21:25.465150347Z" level=info msg="connecting to shim 08a7a0c6b0fee97ff1834dbe199e50d1d78ffdedf5aa05b6207818e10578c6c3" address="unix:///run/containerd/s/9bf404dbfcc29849d2a16f6cc5498e39628acd2127a15c2483019e78ccb7299b" protocol=ttrpc version=3 Dec 13 00:21:25.490040 systemd[1]: Started cri-containerd-08a7a0c6b0fee97ff1834dbe199e50d1d78ffdedf5aa05b6207818e10578c6c3.scope - libcontainer container 08a7a0c6b0fee97ff1834dbe199e50d1d78ffdedf5aa05b6207818e10578c6c3. 
Dec 13 00:21:25.571000 audit: BPF prog-id=142 op=LOAD Dec 13 00:21:25.571000 audit[2913]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2877 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038613761306336623066656539376666313833346462653139396535 Dec 13 00:21:25.571000 audit: BPF prog-id=143 op=LOAD Dec 13 00:21:25.571000 audit[2913]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2877 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038613761306336623066656539376666313833346462653139396535 Dec 13 00:21:25.571000 audit: BPF prog-id=143 op=UNLOAD Dec 13 00:21:25.571000 audit[2913]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038613761306336623066656539376666313833346462653139396535 Dec 13 00:21:25.571000 audit: BPF prog-id=142 op=UNLOAD Dec 13 00:21:25.571000 audit[2913]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2877 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038613761306336623066656539376666313833346462653139396535 Dec 13 00:21:25.571000 audit: BPF prog-id=144 op=LOAD Dec 13 00:21:25.571000 audit[2913]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2877 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038613761306336623066656539376666313833346462653139396535 Dec 13 00:21:25.590615 containerd[1633]: time="2025-12-13T00:21:25.590574502Z" level=info msg="StartContainer for 
\"08a7a0c6b0fee97ff1834dbe199e50d1d78ffdedf5aa05b6207818e10578c6c3\" returns successfully" Dec 13 00:21:25.594342 kubelet[2818]: E1213 00:21:25.594301 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:25.596194 kubelet[2818]: E1213 00:21:25.596170 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:25.600760 containerd[1633]: time="2025-12-13T00:21:25.600713483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-s28k9,Uid:af15c1e4-37df-4d53-8837-f9b860c3ec46,Namespace:tigera-operator,Attempt:0,}" Dec 13 00:21:25.605232 kubelet[2818]: I1213 00:21:25.605164 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lf57l" podStartSLOduration=1.605144331 podStartE2EDuration="1.605144331s" podCreationTimestamp="2025-12-13 00:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:21:25.603098308 +0000 UTC m=+7.134340612" watchObservedRunningTime="2025-12-13 00:21:25.605144331 +0000 UTC m=+7.136386635" Dec 13 00:21:25.625052 containerd[1633]: time="2025-12-13T00:21:25.624998280Z" level=info msg="connecting to shim a75d8fb16b5db56da939837b62644f72d2e15bf530421f7989240cdd552473ab" address="unix:///run/containerd/s/bd785cb8412d0ca941c998689f9b360d63cc484057278b3f2d5187f00b9cf046" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:21:25.654999 systemd[1]: Started cri-containerd-a75d8fb16b5db56da939837b62644f72d2e15bf530421f7989240cdd552473ab.scope - libcontainer container a75d8fb16b5db56da939837b62644f72d2e15bf530421f7989240cdd552473ab. 
Dec 13 00:21:25.670000 audit: BPF prog-id=145 op=LOAD Dec 13 00:21:25.670000 audit: BPF prog-id=146 op=LOAD Dec 13 00:21:25.670000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2953 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137356438666231366235646235366461393339383337623632363434 Dec 13 00:21:25.670000 audit: BPF prog-id=146 op=UNLOAD Dec 13 00:21:25.670000 audit[2967]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137356438666231366235646235366461393339383337623632363434 Dec 13 00:21:25.670000 audit: BPF prog-id=147 op=LOAD Dec 13 00:21:25.670000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2953 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137356438666231366235646235366461393339383337623632363434 Dec 13 00:21:25.670000 audit: BPF prog-id=148 op=LOAD Dec 13 00:21:25.670000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2953 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137356438666231366235646235366461393339383337623632363434 Dec 13 00:21:25.670000 audit: BPF prog-id=148 op=UNLOAD Dec 13 00:21:25.670000 audit[2967]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137356438666231366235646235366461393339383337623632363434 Dec 13 00:21:25.670000 audit: BPF prog-id=147 op=UNLOAD Dec 13 00:21:25.670000 audit[2967]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137356438666231366235646235366461393339383337623632363434 Dec 13 00:21:25.670000 audit: BPF prog-id=149 op=LOAD Dec 13 00:21:25.670000 audit[2967]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2953 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137356438666231366235646235366461393339383337623632363434 Dec 13 00:21:25.710014 containerd[1633]: time="2025-12-13T00:21:25.709950098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-s28k9,Uid:af15c1e4-37df-4d53-8837-f9b860c3ec46,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a75d8fb16b5db56da939837b62644f72d2e15bf530421f7989240cdd552473ab\"" Dec 13 00:21:25.712083 containerd[1633]: time="2025-12-13T00:21:25.712050624Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 13 00:21:25.739000 audit[3023]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.739000 audit[3023]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb69d1c90 a2=0 a3=7fffb69d1c7c items=0 ppid=2927 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 00:21:25.740000 audit[3024]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.740000 audit[3024]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda6b56b60 a2=0 a3=7ffda6b56b4c items=0 ppid=2927 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.740000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 00:21:25.742000 audit[3025]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.742000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea7f0acb0 a2=0 a3=7ffea7f0ac9c items=0 ppid=2927 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 00:21:25.744000 audit[3027]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.744000 audit[3027]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5f5f82e0 a2=0 a3=7ffc5f5f82cc items=0 ppid=2927 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.744000 audit[3026]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.744000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0a230b40 a2=0 a3=7ffe0a230b2c items=0 ppid=2927 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.744000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 00:21:25.744000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 00:21:25.749000 audit[3028]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.749000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe8d0c4d0 a2=0 a3=7fffe8d0c4bc items=0 ppid=2927 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.749000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 00:21:25.843000 audit[3030]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.843000 audit[3030]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc1ae036f0 a2=0 a3=7ffc1ae036dc items=0 ppid=2927 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 00:21:25.847000 audit[3032]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.847000 audit[3032]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc25a37cf0 a2=0 a3=7ffc25a37cdc items=0 ppid=2927 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.847000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 00:21:25.852000 audit[3035]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.852000 audit[3035]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffef8274dd0 a2=0 a3=7ffef8274dbc items=0 ppid=2927 pid=3035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.852000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 00:21:25.854000 audit[3036]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.854000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffeab56cb0 a2=0 a3=7fffeab56c9c items=0 ppid=2927 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 00:21:25.857000 audit[3038]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.857000 audit[3038]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffccb3c4ce0 a2=0 a3=7ffccb3c4ccc items=0 ppid=2927 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 00:21:25.859000 audit[3039]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.859000 audit[3039]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff223afc30 a2=0 a3=7fff223afc1c items=0 ppid=2927 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 00:21:25.862000 audit[3041]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.862000 audit[3041]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdd443a9e0 a2=0 a3=7ffdd443a9cc items=0 
ppid=2927 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.862000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 00:21:25.867000 audit[3044]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.867000 audit[3044]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffb8efe8e0 a2=0 a3=7fffb8efe8cc items=0 ppid=2927 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 00:21:25.869000 audit[3045]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.869000 audit[3045]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedf539080 a2=0 a3=7ffedf53906c items=0 ppid=2927 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.869000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 00:21:25.872000 audit[3047]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.872000 audit[3047]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffdfe978a0 a2=0 a3=7fffdfe9788c items=0 ppid=2927 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 00:21:25.873000 audit[3048]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.873000 audit[3048]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd4f3c9860 a2=0 a3=7ffd4f3c984c items=0 ppid=2927 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 00:21:25.877000 audit[3050]: NETFILTER_CFG table=filter:71 
family=2 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.877000 audit[3050]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd0043f920 a2=0 a3=7ffd0043f90c items=0 ppid=2927 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.877000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 00:21:25.881000 audit[3053]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.881000 audit[3053]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef90a9dc0 a2=0 a3=7ffef90a9dac items=0 ppid=2927 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 00:21:25.886000 audit[3056]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.886000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc3c793b0 a2=0 a3=7ffcc3c7939c items=0 ppid=2927 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 00:21:25.887000 audit[3057]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.887000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeebf5fa40 a2=0 a3=7ffeebf5fa2c items=0 ppid=2927 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.887000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 00:21:25.890000 audit[3059]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.890000 audit[3059]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc3d60cb40 a2=0 a3=7ffc3d60cb2c items=0 ppid=2927 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:21:25.895000 audit[3062]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.895000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe7160c50 a2=0 a3=7fffe7160c3c items=0 ppid=2927 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.895000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:21:25.896000 audit[3063]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.896000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcad232890 a2=0 a3=7ffcad23287c items=0 ppid=2927 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 00:21:25.899000 audit[3065]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 00:21:25.899000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe6e3c9880 a2=0 a3=7ffe6e3c986c items=0 ppid=2927 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.899000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 00:21:25.920000 audit[3071]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:25.920000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd258958d0 a2=0 a3=7ffd258958bc items=0 ppid=2927 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:25.935000 audit[3071]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:25.935000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd258958d0 a2=0 a3=7ffd258958bc 
items=0 ppid=2927 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:25.937000 audit[3076]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.937000 audit[3076]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc504c6990 a2=0 a3=7ffc504c697c items=0 ppid=2927 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.937000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 00:21:25.941000 audit[3078]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.941000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdf6512090 a2=0 a3=7ffdf651207c items=0 ppid=2927 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 00:21:25.945000 audit[3081]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.945000 audit[3081]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcfb43b1a0 a2=0 a3=7ffcfb43b18c items=0 ppid=2927 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 00:21:25.947000 audit[3082]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.947000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff297f4b50 a2=0 a3=7fff297f4b3c items=0 ppid=2927 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 00:21:25.951000 audit[3084]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3084 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.951000 audit[3084]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffedd9ba270 a2=0 a3=7ffedd9ba25c items=0 ppid=2927 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 00:21:25.952000 audit[3085]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.952000 audit[3085]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce5a58460 a2=0 a3=7ffce5a5844c items=0 ppid=2927 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 00:21:25.956000 audit[3087]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.956000 audit[3087]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcd92a14e0 a2=0 a3=7ffcd92a14cc items=0 ppid=2927 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 00:21:25.961000 audit[3090]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.961000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcb8df9990 a2=0 a3=7ffcb8df997c items=0 ppid=2927 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 00:21:25.962000 audit[3091]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.962000 audit[3091]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5a8d6bc0 a2=0 a3=7ffd5a8d6bac items=0 ppid=2927 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 13 00:21:25.962000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 00:21:25.965000 audit[3093]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.965000 audit[3093]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcd515ce30 a2=0 a3=7ffcd515ce1c items=0 ppid=2927 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 00:21:25.967000 audit[3094]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.967000 audit[3094]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc56896f50 a2=0 a3=7ffc56896f3c items=0 ppid=2927 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.967000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 00:21:25.970000 audit[3096]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.970000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdac042ee0 a2=0 a3=7ffdac042ecc items=0 ppid=2927 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.970000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 00:21:25.975000 audit[3099]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.975000 audit[3099]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc36ebce10 a2=0 a3=7ffc36ebcdfc items=0 ppid=2927 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 00:21:25.980000 audit[3102]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.980000 audit[3102]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffc20932d90 a2=0 a3=7ffc20932d7c items=0 ppid=2927 pid=3102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.980000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 00:21:25.982000 audit[3103]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.982000 audit[3103]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff34f67e90 a2=0 a3=7fff34f67e7c items=0 ppid=2927 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 00:21:25.985000 audit[3105]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.985000 audit[3105]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd600529b0 a2=0 a3=7ffd6005299c items=0 ppid=2927 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.985000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:21:25.990000 audit[3108]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.990000 audit[3108]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf5a0f570 a2=0 a3=7ffdf5a0f55c items=0 ppid=2927 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 00:21:25.991000 audit[3109]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.991000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde1fd6ab0 a2=0 a3=7ffde1fd6a9c items=0 ppid=2927 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.991000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 00:21:25.994000 audit[3111]: NETFILTER_CFG table=nat:99 family=10 
entries=2 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.994000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcf09e63b0 a2=0 a3=7ffcf09e639c items=0 ppid=2927 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.994000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 00:21:25.995000 audit[3112]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.995000 audit[3112]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc80f6c9b0 a2=0 a3=7ffc80f6c99c items=0 ppid=2927 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.995000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 00:21:25.999000 audit[3114]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:25.999000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe3e9df730 a2=0 a3=7ffe3e9df71c items=0 ppid=2927 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:25.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:21:26.004000 audit[3117]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 00:21:26.004000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffe39c8d60 a2=0 a3=7fffe39c8d4c items=0 ppid=2927 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:26.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 00:21:26.008000 audit[3119]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 00:21:26.008000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd2e8db890 a2=0 a3=7ffd2e8db87c items=0 ppid=2927 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:26.008000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:26.009000 audit[3119]: NETFILTER_CFG table=nat:104 
family=10 entries=7 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 00:21:26.009000 audit[3119]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd2e8db890 a2=0 a3=7ffd2e8db87c items=0 ppid=2927 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:26.009000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:26.376801 update_engine[1607]: I20251213 00:21:26.376637 1607 update_attempter.cc:509] Updating boot flags... Dec 13 00:21:26.596340 kubelet[2818]: E1213 00:21:26.596305 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:27.350422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097406571.mount: Deactivated successfully. Dec 13 00:21:27.690773 containerd[1633]: time="2025-12-13T00:21:27.690637641Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:27.691570 containerd[1633]: time="2025-12-13T00:21:27.691535719Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Dec 13 00:21:27.692652 containerd[1633]: time="2025-12-13T00:21:27.692614208Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:27.694829 containerd[1633]: time="2025-12-13T00:21:27.694755927Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:27.695387 containerd[1633]: time="2025-12-13T00:21:27.695350071Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.9832595s" Dec 13 00:21:27.695387 containerd[1633]: time="2025-12-13T00:21:27.695382152Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 13 00:21:27.697324 containerd[1633]: time="2025-12-13T00:21:27.697295830Z" level=info msg="CreateContainer within sandbox \"a75d8fb16b5db56da939837b62644f72d2e15bf530421f7989240cdd552473ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 00:21:27.705719 containerd[1633]: time="2025-12-13T00:21:27.705660545Z" level=info msg="Container e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:21:27.709053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount564625494.mount: Deactivated successfully. 
Dec 13 00:21:27.712532 containerd[1633]: time="2025-12-13T00:21:27.712486852Z" level=info msg="CreateContainer within sandbox \"a75d8fb16b5db56da939837b62644f72d2e15bf530421f7989240cdd552473ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a\"" Dec 13 00:21:27.713100 containerd[1633]: time="2025-12-13T00:21:27.713042181Z" level=info msg="StartContainer for \"e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a\"" Dec 13 00:21:27.713978 containerd[1633]: time="2025-12-13T00:21:27.713938085Z" level=info msg="connecting to shim e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a" address="unix:///run/containerd/s/bd785cb8412d0ca941c998689f9b360d63cc484057278b3f2d5187f00b9cf046" protocol=ttrpc version=3 Dec 13 00:21:27.734963 systemd[1]: Started cri-containerd-e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a.scope - libcontainer container e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a. Dec 13 00:21:27.746000 audit: BPF prog-id=150 op=LOAD Dec 13 00:21:27.746000 audit: BPF prog-id=151 op=LOAD Dec 13 00:21:27.746000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2953 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:27.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663232626335326661303734646566383733336433646536316464 Dec 13 00:21:27.747000 audit: BPF prog-id=151 op=UNLOAD Dec 13 00:21:27.747000 audit[3143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:27.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663232626335326661303734646566383733336433646536316464 Dec 13 00:21:27.747000 audit: BPF prog-id=152 op=LOAD Dec 13 00:21:27.747000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2953 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:27.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663232626335326661303734646566383733336433646536316464 Dec 13 00:21:27.747000 audit: BPF prog-id=153 op=LOAD Dec 13 00:21:27.747000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2953 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:27.747000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663232626335326661303734646566383733336433646536316464 Dec 13 00:21:27.747000 audit: BPF prog-id=153 op=UNLOAD Dec 13 00:21:27.747000 audit[3143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:27.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663232626335326661303734646566383733336433646536316464 Dec 13 00:21:27.747000 audit: BPF prog-id=152 op=UNLOAD Dec 13 00:21:27.747000 audit[3143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:27.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663232626335326661303734646566383733336433646536316464 Dec 13 00:21:27.747000 audit: BPF prog-id=154 op=LOAD Dec 13 00:21:27.747000 audit[3143]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2953 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:27.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536663232626335326661303734646566383733336433646536316464 Dec 13 00:21:27.771646 containerd[1633]: time="2025-12-13T00:21:27.771601029Z" level=info msg="StartContainer for \"e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a\" returns successfully" Dec 13 00:21:28.608157 kubelet[2818]: I1213 00:21:28.608084 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-s28k9" podStartSLOduration=1.6234084709999999 podStartE2EDuration="3.608064273s" podCreationTimestamp="2025-12-13 00:21:25 +0000 UTC" firstStartedPulling="2025-12-13 00:21:25.711484583 +0000 UTC m=+7.242726887" lastFinishedPulling="2025-12-13 00:21:27.696140385 +0000 UTC m=+9.227382689" observedRunningTime="2025-12-13 00:21:28.607976187 +0000 UTC m=+10.139218491" watchObservedRunningTime="2025-12-13 00:21:28.608064273 +0000 UTC m=+10.139306587" Dec 13 00:21:28.994014 kubelet[2818]: E1213 00:21:28.993793 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:29.601764 kubelet[2818]: E1213 00:21:29.601718 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:29.741034 systemd[1]: cri-containerd-e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a.scope: Deactivated successfully. Dec 13 00:21:29.743937 containerd[1633]: time="2025-12-13T00:21:29.743890407Z" level=info msg="received container exit event container_id:\"e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a\" id:\"e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a\" pid:3156 exit_status:1 exited_at:{seconds:1765585289 nanos:743104432}" Dec 13 00:21:29.746000 audit: BPF prog-id=150 op=UNLOAD Dec 13 00:21:29.746000 audit: BPF prog-id=154 op=UNLOAD Dec 13 00:21:29.775931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a-rootfs.mount: Deactivated successfully. Dec 13 00:21:30.607686 kubelet[2818]: I1213 00:21:30.607613 2818 scope.go:117] "RemoveContainer" containerID="e6f22bc52fa074def8733d3de61ddd302e41c572606e27c3b6d937cedafbda5a" Dec 13 00:21:30.611761 containerd[1633]: time="2025-12-13T00:21:30.611711082Z" level=info msg="CreateContainer within sandbox \"a75d8fb16b5db56da939837b62644f72d2e15bf530421f7989240cdd552473ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 13 00:21:30.624109 containerd[1633]: time="2025-12-13T00:21:30.624059135Z" level=info msg="Container c5813252c4fb119daecedfb84ebdb64bf5d347c93354f733c07e4455043f9426: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:21:30.632073 containerd[1633]: time="2025-12-13T00:21:30.632023932Z" level=info msg="CreateContainer within sandbox \"a75d8fb16b5db56da939837b62644f72d2e15bf530421f7989240cdd552473ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c5813252c4fb119daecedfb84ebdb64bf5d347c93354f733c07e4455043f9426\"" Dec 13 00:21:30.632756 containerd[1633]: time="2025-12-13T00:21:30.632726148Z" level=info msg="StartContainer for \"c5813252c4fb119daecedfb84ebdb64bf5d347c93354f733c07e4455043f9426\"" Dec 13 00:21:30.637163 containerd[1633]: time="2025-12-13T00:21:30.637120886Z" level=info msg="connecting to shim c5813252c4fb119daecedfb84ebdb64bf5d347c93354f733c07e4455043f9426" address="unix:///run/containerd/s/bd785cb8412d0ca941c998689f9b360d63cc484057278b3f2d5187f00b9cf046" protocol=ttrpc version=3 Dec 13 00:21:30.661066 systemd[1]: Started cri-containerd-c5813252c4fb119daecedfb84ebdb64bf5d347c93354f733c07e4455043f9426.scope - libcontainer container c5813252c4fb119daecedfb84ebdb64bf5d347c93354f733c07e4455043f9426. 
Dec 13 00:21:30.677000 audit: BPF prog-id=155 op=LOAD Dec 13 00:21:30.679398 kernel: kauditd_printk_skb: 226 callbacks suppressed Dec 13 00:21:30.679479 kernel: audit: type=1334 audit(1765585290.677:528): prog-id=155 op=LOAD Dec 13 00:21:30.677000 audit: BPF prog-id=156 op=LOAD Dec 13 00:21:30.682533 kernel: audit: type=1334 audit(1765585290.677:529): prog-id=156 op=LOAD Dec 13 00:21:30.677000 audit[3206]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2953 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:30.688775 kernel: audit: type=1300 audit(1765585290.677:529): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2953 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:30.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383133323532633466623131396461656365646662383465626462 Dec 13 00:21:30.695826 kernel: audit: type=1327 audit(1765585290.677:529): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383133323532633466623131396461656365646662383465626462 Dec 13 00:21:30.677000 audit: BPF prog-id=156 op=UNLOAD Dec 13 00:21:30.677000 audit[3206]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:30.704549 kernel: audit: type=1334 audit(1765585290.677:530): prog-id=156 op=UNLOAD Dec 13 00:21:30.704600 kernel: audit: type=1300 audit(1765585290.677:530): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:30.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383133323532633466623131396461656365646662383465626462 Dec 13 00:21:30.710691 kernel: audit: type=1327 audit(1765585290.677:530): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383133323532633466623131396461656365646662383465626462 Dec 13 00:21:30.678000 audit: BPF prog-id=157 op=LOAD Dec 13 00:21:30.678000 audit[3206]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2953 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:30.718588 
containerd[1633]: time="2025-12-13T00:21:30.718551095Z" level=info msg="StartContainer for \"c5813252c4fb119daecedfb84ebdb64bf5d347c93354f733c07e4455043f9426\" returns successfully" Dec 13 00:21:30.719943 kernel: audit: type=1334 audit(1765585290.678:531): prog-id=157 op=LOAD Dec 13 00:21:30.720006 kernel: audit: type=1300 audit(1765585290.678:531): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2953 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:30.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383133323532633466623131396461656365646662383465626462 Dec 13 00:21:30.728115 kernel: audit: type=1327 audit(1765585290.678:531): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383133323532633466623131396461656365646662383465626462 Dec 13 00:21:30.678000 audit: BPF prog-id=158 op=LOAD Dec 13 00:21:30.678000 audit[3206]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2953 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:30.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383133323532633466623131396461656365646662383465626462 Dec 13 00:21:30.678000 audit: BPF prog-id=158 op=UNLOAD Dec 13 00:21:30.678000 audit[3206]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:30.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383133323532633466623131396461656365646662383465626462 Dec 13 00:21:30.678000 audit: BPF prog-id=157 op=UNLOAD Dec 13 00:21:30.678000 audit[3206]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2953 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:30.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383133323532633466623131396461656365646662383465626462 Dec 13 00:21:30.678000 audit: BPF prog-id=159 op=LOAD Dec 13 00:21:30.678000 audit[3206]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2953 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:30.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335383133323532633466623131396461656365646662383465626462 Dec 13 00:21:31.264072 kubelet[2818]: E1213 00:21:31.264021 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:31.610746 kubelet[2818]: E1213 00:21:31.610712 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:33.261388 sudo[1868]: pam_unix(sudo:session): session closed for user root Dec 13 00:21:33.260000 audit[1868]: USER_END pid=1868 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:21:33.260000 audit[1868]: CRED_DISP pid=1868 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 00:21:33.266262 sshd[1867]: Connection closed by 10.0.0.1 port 33234 Dec 13 00:21:33.266791 sshd-session[1860]: pam_unix(sshd:session): session closed for user core Dec 13 00:21:33.268000 audit[1860]: USER_END pid=1860 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:21:33.268000 audit[1860]: CRED_DISP pid=1860 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:21:33.273719 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:33234.service: Deactivated successfully. Dec 13 00:21:33.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.65:22-10.0.0.1:33234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:21:33.275956 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 00:21:33.276204 systemd[1]: session-10.scope: Consumed 4.695s CPU time, 188.8M memory peak. Dec 13 00:21:33.277430 systemd-logind[1605]: Session 10 logged out. Waiting for processes to exit. Dec 13 00:21:33.278920 systemd-logind[1605]: Removed session 10. 
Dec 13 00:21:34.461000 audit[3282]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:34.461000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff8dd73ec0 a2=0 a3=7fff8dd73eac items=0 ppid=2927 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:34.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:34.466000 audit[3282]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:34.466000 audit[3282]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8dd73ec0 a2=0 a3=0 items=0 ppid=2927 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:34.466000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:34.482000 audit[3284]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:34.482000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdcb4e2a90 a2=0 a3=7ffdcb4e2a7c items=0 ppid=2927 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:34.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:34.495000 audit[3284]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3284 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:34.495000 audit[3284]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdcb4e2a90 a2=0 a3=0 items=0 ppid=2927 pid=3284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:34.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:36.691205 kernel: kauditd_printk_skb: 29 callbacks suppressed Dec 13 00:21:36.691373 kernel: audit: type=1325 audit(1765585296.683:545): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:36.683000 audit[3287]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:36.700585 kernel: audit: type=1300 audit(1765585296.683:545): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcb017c930 a2=0 a3=7ffcb017c91c items=0 ppid=2927 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:36.683000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcb017c930 a2=0 a3=7ffcb017c91c items=0 ppid=2927 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:36.683000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:36.709913 kernel: audit: type=1327 audit(1765585296.683:545): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:36.710041 kernel: audit: type=1325 audit(1765585296.702:546): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:36.702000 audit[3287]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:36.702000 audit[3287]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb017c930 a2=0 a3=0 items=0 ppid=2927 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:36.719349 kernel: audit: type=1300 audit(1765585296.702:546): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcb017c930 a2=0 a3=0 items=0 ppid=2927 pid=3287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:36.702000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:36.725836 kernel: audit: type=1327 audit(1765585296.702:546): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:36.728000 audit[3289]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:36.728000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc7b748230 a2=0 a3=7ffc7b74821c items=0 ppid=2927 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:36.740770 kernel: audit: type=1325 audit(1765585296.728:547): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:36.740869 kernel: audit: type=1300 audit(1765585296.728:547): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc7b748230 a2=0 a3=7ffc7b74821c items=0 ppid=2927 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:36.740915 kernel: audit: type=1327 audit(1765585296.728:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:36.728000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:36.747000 audit[3289]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:36.747000 audit[3289]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc7b748230 a2=0 a3=0 items=0 ppid=2927 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:36.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:36.752839 kernel: audit: type=1325 audit(1765585296.747:548): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3289 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:37.767000 audit[3291]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:37.767000 audit[3291]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee4165b50 a2=0 a3=7ffee4165b3c items=0 ppid=2927 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:37.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:37.773000 audit[3291]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:37.773000 audit[3291]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffee4165b50 a2=0 a3=0 items=0 ppid=2927 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:37.773000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:38.384422 systemd[1]: Created slice kubepods-besteffort-podc71577c7_b0c3_44bf_9c04_5b263ed51d0a.slice - libcontainer container kubepods-besteffort-podc71577c7_b0c3_44bf_9c04_5b263ed51d0a.slice. 
Dec 13 00:21:38.455683 kubelet[2818]: I1213 00:21:38.455608 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c71577c7-b0c3-44bf-9c04-5b263ed51d0a-tigera-ca-bundle\") pod \"calico-typha-5589dbc544-mfndd\" (UID: \"c71577c7-b0c3-44bf-9c04-5b263ed51d0a\") " pod="calico-system/calico-typha-5589dbc544-mfndd" Dec 13 00:21:38.455683 kubelet[2818]: I1213 00:21:38.455684 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c71577c7-b0c3-44bf-9c04-5b263ed51d0a-typha-certs\") pod \"calico-typha-5589dbc544-mfndd\" (UID: \"c71577c7-b0c3-44bf-9c04-5b263ed51d0a\") " pod="calico-system/calico-typha-5589dbc544-mfndd" Dec 13 00:21:38.456178 kubelet[2818]: I1213 00:21:38.455722 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn5zf\" (UniqueName: \"kubernetes.io/projected/c71577c7-b0c3-44bf-9c04-5b263ed51d0a-kube-api-access-fn5zf\") pod \"calico-typha-5589dbc544-mfndd\" (UID: \"c71577c7-b0c3-44bf-9c04-5b263ed51d0a\") " pod="calico-system/calico-typha-5589dbc544-mfndd" Dec 13 00:21:38.548887 systemd[1]: Created slice kubepods-besteffort-pod52a3053f_d422_4936_b4d9_e819ea93590b.slice - libcontainer container kubepods-besteffort-pod52a3053f_d422_4936_b4d9_e819ea93590b.slice. Dec 13 00:21:38.556257 kubelet[2818]: I1213 00:21:38.556191 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52a3053f-d422-4936-b4d9-e819ea93590b-lib-modules\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556523 kubelet[2818]: I1213 00:21:38.556453 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/52a3053f-d422-4936-b4d9-e819ea93590b-var-lib-calico\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556523 kubelet[2818]: I1213 00:21:38.556490 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/52a3053f-d422-4936-b4d9-e819ea93590b-cni-bin-dir\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556523 kubelet[2818]: I1213 00:21:38.556524 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/52a3053f-d422-4936-b4d9-e819ea93590b-policysync\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556751 kubelet[2818]: I1213 00:21:38.556543 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52a3053f-d422-4936-b4d9-e819ea93590b-xtables-lock\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556751 kubelet[2818]: I1213 00:21:38.556567 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/52a3053f-d422-4936-b4d9-e819ea93590b-cni-log-dir\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556751 kubelet[2818]: I1213 00:21:38.556584 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/52a3053f-d422-4936-b4d9-e819ea93590b-cni-net-dir\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556751 kubelet[2818]: I1213 00:21:38.556603 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/52a3053f-d422-4936-b4d9-e819ea93590b-flexvol-driver-host\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556751 kubelet[2818]: I1213 00:21:38.556625 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52a3053f-d422-4936-b4d9-e819ea93590b-tigera-ca-bundle\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556954 kubelet[2818]: I1213 00:21:38.556643 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/52a3053f-d422-4936-b4d9-e819ea93590b-var-run-calico\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556954 kubelet[2818]: I1213 00:21:38.556678 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/52a3053f-d422-4936-b4d9-e819ea93590b-node-certs\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.556954 kubelet[2818]: I1213 00:21:38.556742 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8p9\" (UniqueName: \"kubernetes.io/projected/52a3053f-d422-4936-b4d9-e819ea93590b-kube-api-access-dg8p9\") pod \"calico-node-cnwpg\" (UID: \"52a3053f-d422-4936-b4d9-e819ea93590b\") " pod="calico-system/calico-node-cnwpg" Dec 13 00:21:38.660889 kubelet[2818]: E1213 00:21:38.659342 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.660889 kubelet[2818]: W1213 00:21:38.659369 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.660889 kubelet[2818]: E1213 00:21:38.659413 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.664282 kubelet[2818]: E1213 00:21:38.664249 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.664282 kubelet[2818]: W1213 00:21:38.664277 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.664898 kubelet[2818]: E1213 00:21:38.664306 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.668337 kubelet[2818]: E1213 00:21:38.668220 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.668337 kubelet[2818]: W1213 00:21:38.668269 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.668337 kubelet[2818]: E1213 00:21:38.668297 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.688255 kubelet[2818]: E1213 00:21:38.688219 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:38.688831 containerd[1633]: time="2025-12-13T00:21:38.688777977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5589dbc544-mfndd,Uid:c71577c7-b0c3-44bf-9c04-5b263ed51d0a,Namespace:calico-system,Attempt:0,}" Dec 13 00:21:38.720884 containerd[1633]: time="2025-12-13T00:21:38.720651822Z" level=info msg="connecting to shim 0d05f5ec0f8b6b17c106eff5b08509c0dcefd76b5ff680ca6a548bc165d98483" address="unix:///run/containerd/s/56913886b47aa7247d2878fd35c73b813a92ac41ff0a81c582ff3f21185ca5cb" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:21:38.733941 kubelet[2818]: E1213 00:21:38.733894 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:21:38.735903 kubelet[2818]: I1213 00:21:38.735874 2818 status_manager.go:890] "Failed to get status for pod" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" pod="calico-system/csi-node-driver-wvdrp" err="pods \"csi-node-driver-wvdrp\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Dec 13 00:21:38.752279 kubelet[2818]: E1213 00:21:38.752236 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.752279 kubelet[2818]: W1213 00:21:38.752257 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.752279 kubelet[2818]: E1213 00:21:38.752279 2818 plugins.go:695] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.752485 kubelet[2818]: E1213 00:21:38.752472 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.752485 kubelet[2818]: W1213 00:21:38.752482 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.752571 kubelet[2818]: E1213 00:21:38.752498 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.752676 kubelet[2818]: E1213 00:21:38.752665 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.752676 kubelet[2818]: W1213 00:21:38.752673 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.752749 kubelet[2818]: E1213 00:21:38.752681 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.753061 systemd[1]: Started cri-containerd-0d05f5ec0f8b6b17c106eff5b08509c0dcefd76b5ff680ca6a548bc165d98483.scope - libcontainer container 0d05f5ec0f8b6b17c106eff5b08509c0dcefd76b5ff680ca6a548bc165d98483. Dec 13 00:21:38.754096 kubelet[2818]: E1213 00:21:38.754047 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.754096 kubelet[2818]: W1213 00:21:38.754059 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.754096 kubelet[2818]: E1213 00:21:38.754069 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.754297 kubelet[2818]: E1213 00:21:38.754245 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.754297 kubelet[2818]: W1213 00:21:38.754252 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.754297 kubelet[2818]: E1213 00:21:38.754260 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.754560 kubelet[2818]: E1213 00:21:38.754532 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.754560 kubelet[2818]: W1213 00:21:38.754539 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.754560 kubelet[2818]: E1213 00:21:38.754546 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.754742 kubelet[2818]: E1213 00:21:38.754706 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.754742 kubelet[2818]: W1213 00:21:38.754713 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.754742 kubelet[2818]: E1213 00:21:38.754720 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.754939 kubelet[2818]: E1213 00:21:38.754898 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.754939 kubelet[2818]: W1213 00:21:38.754906 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.754939 kubelet[2818]: E1213 00:21:38.754914 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.755083 kubelet[2818]: E1213 00:21:38.755069 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.755083 kubelet[2818]: W1213 00:21:38.755078 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.755133 kubelet[2818]: E1213 00:21:38.755086 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.755239 kubelet[2818]: E1213 00:21:38.755228 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.755239 kubelet[2818]: W1213 00:21:38.755237 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.755284 kubelet[2818]: E1213 00:21:38.755246 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.755398 kubelet[2818]: E1213 00:21:38.755387 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.755398 kubelet[2818]: W1213 00:21:38.755397 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.755440 kubelet[2818]: E1213 00:21:38.755404 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.756697 kubelet[2818]: E1213 00:21:38.755563 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.756697 kubelet[2818]: W1213 00:21:38.755572 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.756697 kubelet[2818]: E1213 00:21:38.755580 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.756697 kubelet[2818]: E1213 00:21:38.755738 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.756697 kubelet[2818]: W1213 00:21:38.755745 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.756697 kubelet[2818]: E1213 00:21:38.755752 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.756697 kubelet[2818]: E1213 00:21:38.755947 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.756697 kubelet[2818]: W1213 00:21:38.755954 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.756697 kubelet[2818]: E1213 00:21:38.755962 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.756697 kubelet[2818]: E1213 00:21:38.756132 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.757024 kubelet[2818]: W1213 00:21:38.756139 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.757024 kubelet[2818]: E1213 00:21:38.756146 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.757024 kubelet[2818]: E1213 00:21:38.756315 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.757024 kubelet[2818]: W1213 00:21:38.756322 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.757024 kubelet[2818]: E1213 00:21:38.756329 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.757024 kubelet[2818]: E1213 00:21:38.756525 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.757024 kubelet[2818]: W1213 00:21:38.756533 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.757024 kubelet[2818]: E1213 00:21:38.756545 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.757219 kubelet[2818]: E1213 00:21:38.757062 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.757219 kubelet[2818]: W1213 00:21:38.757070 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.757219 kubelet[2818]: E1213 00:21:38.757187 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.757844 kubelet[2818]: E1213 00:21:38.757463 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.757844 kubelet[2818]: W1213 00:21:38.757474 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.757844 kubelet[2818]: E1213 00:21:38.757482 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.757844 kubelet[2818]: E1213 00:21:38.757676 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.757844 kubelet[2818]: W1213 00:21:38.757685 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.757844 kubelet[2818]: E1213 00:21:38.757694 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.759385 kubelet[2818]: E1213 00:21:38.759371 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.759524 kubelet[2818]: W1213 00:21:38.759474 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.759524 kubelet[2818]: E1213 00:21:38.759505 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.760002 kubelet[2818]: I1213 00:21:38.759985 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq9mf\" (UniqueName: \"kubernetes.io/projected/dedbe661-92c2-4c3f-9ab9-3f4df404e3b1-kube-api-access-hq9mf\") pod \"csi-node-driver-wvdrp\" (UID: \"dedbe661-92c2-4c3f-9ab9-3f4df404e3b1\") " pod="calico-system/csi-node-driver-wvdrp" Dec 13 00:21:38.760379 kubelet[2818]: E1213 00:21:38.760353 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.760379 kubelet[2818]: W1213 00:21:38.760365 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.760528 kubelet[2818]: E1213 00:21:38.760473 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.760875 kubelet[2818]: E1213 00:21:38.760845 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.760875 kubelet[2818]: W1213 00:21:38.760859 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.761144 kubelet[2818]: E1213 00:21:38.761015 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.761340 kubelet[2818]: E1213 00:21:38.761327 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.761410 kubelet[2818]: W1213 00:21:38.761398 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.761477 kubelet[2818]: E1213 00:21:38.761464 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.761588 kubelet[2818]: I1213 00:21:38.761571 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dedbe661-92c2-4c3f-9ab9-3f4df404e3b1-socket-dir\") pod \"csi-node-driver-wvdrp\" (UID: \"dedbe661-92c2-4c3f-9ab9-3f4df404e3b1\") " pod="calico-system/csi-node-driver-wvdrp" Dec 13 00:21:38.762079 kubelet[2818]: E1213 00:21:38.762048 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.762079 kubelet[2818]: W1213 00:21:38.762061 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.762252 kubelet[2818]: E1213 00:21:38.762204 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.763124 kubelet[2818]: E1213 00:21:38.763053 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.763333 kubelet[2818]: W1213 00:21:38.763066 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.763333 kubelet[2818]: E1213 00:21:38.763284 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.763713 kubelet[2818]: E1213 00:21:38.763665 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.763713 kubelet[2818]: W1213 00:21:38.763679 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.763713 kubelet[2818]: E1213 00:21:38.763690 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.764135 kubelet[2818]: I1213 00:21:38.763955 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dedbe661-92c2-4c3f-9ab9-3f4df404e3b1-varrun\") pod \"csi-node-driver-wvdrp\" (UID: \"dedbe661-92c2-4c3f-9ab9-3f4df404e3b1\") " pod="calico-system/csi-node-driver-wvdrp" Dec 13 00:21:38.764387 kubelet[2818]: E1213 00:21:38.764360 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.764387 kubelet[2818]: W1213 00:21:38.764372 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.764540 kubelet[2818]: E1213 00:21:38.764482 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.764869 kubelet[2818]: E1213 00:21:38.764826 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.764869 kubelet[2818]: W1213 00:21:38.764855 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.765361 kubelet[2818]: E1213 00:21:38.764980 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.765452 kubelet[2818]: E1213 00:21:38.765436 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.765887 kubelet[2818]: W1213 00:21:38.765527 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.765887 kubelet[2818]: E1213 00:21:38.765541 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.765887 kubelet[2818]: I1213 00:21:38.765596 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dedbe661-92c2-4c3f-9ab9-3f4df404e3b1-kubelet-dir\") pod \"csi-node-driver-wvdrp\" (UID: \"dedbe661-92c2-4c3f-9ab9-3f4df404e3b1\") " pod="calico-system/csi-node-driver-wvdrp" Dec 13 00:21:38.766420 kubelet[2818]: E1213 00:21:38.766170 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.766420 kubelet[2818]: W1213 00:21:38.766182 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.766420 kubelet[2818]: E1213 00:21:38.766200 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.766420 kubelet[2818]: I1213 00:21:38.766219 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dedbe661-92c2-4c3f-9ab9-3f4df404e3b1-registration-dir\") pod \"csi-node-driver-wvdrp\" (UID: \"dedbe661-92c2-4c3f-9ab9-3f4df404e3b1\") " pod="calico-system/csi-node-driver-wvdrp" Dec 13 00:21:38.766878 kubelet[2818]: E1213 00:21:38.766831 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.766878 kubelet[2818]: W1213 00:21:38.766860 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.767540 kubelet[2818]: E1213 00:21:38.766965 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.767540 kubelet[2818]: E1213 00:21:38.767404 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.767540 kubelet[2818]: W1213 00:21:38.767415 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.767869 kubelet[2818]: E1213 00:21:38.767853 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.768281 kubelet[2818]: E1213 00:21:38.768191 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.768281 kubelet[2818]: W1213 00:21:38.768205 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.768281 kubelet[2818]: E1213 00:21:38.768217 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.769151 kubelet[2818]: E1213 00:21:38.768691 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.769151 kubelet[2818]: W1213 00:21:38.768702 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.769151 kubelet[2818]: E1213 00:21:38.768713 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.768000 audit: BPF prog-id=160 op=LOAD Dec 13 00:21:38.769000 audit: BPF prog-id=161 op=LOAD Dec 13 00:21:38.769000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3307 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303566356563306638623662313763313036656666356230383530 Dec 13 00:21:38.769000 audit: BPF prog-id=161 op=UNLOAD Dec 13 00:21:38.769000 audit[3321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303566356563306638623662313763313036656666356230383530 Dec 13 00:21:38.769000 audit: BPF prog-id=162 op=LOAD Dec 13 00:21:38.769000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3307 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303566356563306638623662313763313036656666356230383530 Dec 13 00:21:38.769000 audit: BPF prog-id=163 op=LOAD Dec 13 00:21:38.769000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3307 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303566356563306638623662313763313036656666356230383530 Dec 13 00:21:38.769000 audit: BPF prog-id=163 op=UNLOAD Dec 13 00:21:38.769000 audit[3321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303566356563306638623662313763313036656666356230383530 Dec 13 00:21:38.769000 audit: BPF prog-id=162 op=UNLOAD Dec 13 
00:21:38.769000 audit[3321]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303566356563306638623662313763313036656666356230383530 Dec 13 00:21:38.769000 audit: BPF prog-id=164 op=LOAD Dec 13 00:21:38.769000 audit[3321]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3307 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.769000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064303566356563306638623662313763313036656666356230383530 Dec 13 00:21:38.794000 audit[3385]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:38.794000 audit[3385]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff0631e8c0 a2=0 a3=7fff0631e8ac items=0 ppid=2927 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.794000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:38.799000 audit[3385]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:21:38.799000 audit[3385]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0631e8c0 a2=0 a3=0 items=0 ppid=2927 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.799000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:21:38.817508 containerd[1633]: time="2025-12-13T00:21:38.817440967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5589dbc544-mfndd,Uid:c71577c7-b0c3-44bf-9c04-5b263ed51d0a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d05f5ec0f8b6b17c106eff5b08509c0dcefd76b5ff680ca6a548bc165d98483\"" Dec 13 00:21:38.818592 kubelet[2818]: E1213 00:21:38.818553 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:38.819432 containerd[1633]: time="2025-12-13T00:21:38.819392382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 13 00:21:38.853802 kubelet[2818]: E1213 00:21:38.852695 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:38.854186 containerd[1633]: time="2025-12-13T00:21:38.854113407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cnwpg,Uid:52a3053f-d422-4936-b4d9-e819ea93590b,Namespace:calico-system,Attempt:0,}" Dec 13 00:21:38.869365 kubelet[2818]: E1213 00:21:38.868564 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.869365 kubelet[2818]: W1213 00:21:38.868866 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.869365 kubelet[2818]: E1213 00:21:38.868898 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.870584 kubelet[2818]: E1213 00:21:38.870555 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.870584 kubelet[2818]: W1213 00:21:38.870568 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.870915 kubelet[2818]: E1213 00:21:38.870858 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.871314 kubelet[2818]: E1213 00:21:38.871273 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.871475 kubelet[2818]: W1213 00:21:38.871310 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.871475 kubelet[2818]: E1213 00:21:38.871353 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.873080 kubelet[2818]: E1213 00:21:38.871683 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.873080 kubelet[2818]: W1213 00:21:38.871712 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.873080 kubelet[2818]: E1213 00:21:38.871759 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.873080 kubelet[2818]: E1213 00:21:38.872091 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.873080 kubelet[2818]: W1213 00:21:38.872102 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.873080 kubelet[2818]: E1213 00:21:38.872113 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.873080 kubelet[2818]: E1213 00:21:38.872386 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.873080 kubelet[2818]: W1213 00:21:38.872397 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.873080 kubelet[2818]: E1213 00:21:38.872409 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.874024 kubelet[2818]: E1213 00:21:38.873996 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.874076 kubelet[2818]: W1213 00:21:38.874015 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.874076 kubelet[2818]: E1213 00:21:38.874068 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.874558 kubelet[2818]: E1213 00:21:38.874522 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.874558 kubelet[2818]: W1213 00:21:38.874549 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.874661 kubelet[2818]: E1213 00:21:38.874629 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.875802 kubelet[2818]: E1213 00:21:38.875756 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.875888 kubelet[2818]: W1213 00:21:38.875803 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.875928 kubelet[2818]: E1213 00:21:38.875882 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.876094 kubelet[2818]: E1213 00:21:38.876067 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.876094 kubelet[2818]: W1213 00:21:38.876085 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.876178 kubelet[2818]: E1213 00:21:38.876132 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.876838 kubelet[2818]: E1213 00:21:38.876300 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.876838 kubelet[2818]: W1213 00:21:38.876318 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.876838 kubelet[2818]: E1213 00:21:38.876356 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.876838 kubelet[2818]: E1213 00:21:38.876564 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.876838 kubelet[2818]: W1213 00:21:38.876573 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.876838 kubelet[2818]: E1213 00:21:38.876643 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.876838 kubelet[2818]: E1213 00:21:38.876804 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.876838 kubelet[2818]: W1213 00:21:38.876825 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.876838 kubelet[2818]: E1213 00:21:38.876841 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.877161 kubelet[2818]: E1213 00:21:38.877060 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.877161 kubelet[2818]: W1213 00:21:38.877070 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.877161 kubelet[2818]: E1213 00:21:38.877097 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.877836 kubelet[2818]: E1213 00:21:38.877461 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.877836 kubelet[2818]: W1213 00:21:38.877478 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.877836 kubelet[2818]: E1213 00:21:38.877511 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.877836 kubelet[2818]: E1213 00:21:38.877741 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.877836 kubelet[2818]: W1213 00:21:38.877751 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.878022 kubelet[2818]: E1213 00:21:38.877883 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.878022 kubelet[2818]: E1213 00:21:38.877954 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.878022 kubelet[2818]: W1213 00:21:38.877965 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.879475 kubelet[2818]: E1213 00:21:38.878143 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.879475 kubelet[2818]: W1213 00:21:38.878157 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.879475 kubelet[2818]: E1213 00:21:38.878382 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.879475 kubelet[2818]: W1213 00:21:38.878393 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.879475 kubelet[2818]: E1213 00:21:38.878408 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.879475 kubelet[2818]: E1213 00:21:38.878531 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.879475 kubelet[2818]: E1213 00:21:38.878558 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.879475 kubelet[2818]: E1213 00:21:38.878666 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.879475 kubelet[2818]: W1213 00:21:38.878676 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.879475 kubelet[2818]: E1213 00:21:38.878696 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.879913 kubelet[2818]: E1213 00:21:38.878898 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.879913 kubelet[2818]: W1213 00:21:38.878907 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.879913 kubelet[2818]: E1213 00:21:38.878919 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.879913 kubelet[2818]: E1213 00:21:38.879087 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.879913 kubelet[2818]: W1213 00:21:38.879097 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.879913 kubelet[2818]: E1213 00:21:38.879106 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.879913 kubelet[2818]: E1213 00:21:38.879328 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.879913 kubelet[2818]: W1213 00:21:38.879339 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.879913 kubelet[2818]: E1213 00:21:38.879366 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.879913 kubelet[2818]: E1213 00:21:38.879666 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.880143 kubelet[2818]: W1213 00:21:38.879677 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.880143 kubelet[2818]: E1213 00:21:38.879688 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:38.880143 kubelet[2818]: E1213 00:21:38.879992 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.880143 kubelet[2818]: W1213 00:21:38.880003 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.880143 kubelet[2818]: E1213 00:21:38.880014 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.885172 kubelet[2818]: E1213 00:21:38.885112 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:38.885172 kubelet[2818]: W1213 00:21:38.885133 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:38.885172 kubelet[2818]: E1213 00:21:38.885177 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:38.891979 containerd[1633]: time="2025-12-13T00:21:38.891939150Z" level=info msg="connecting to shim 27c3b67149de18d5045a92a9e592549042bca539ac8d82af916eb24f5a96bce5" address="unix:///run/containerd/s/d2bacede5c6d59f83bd4b4a2be62373a6fa675f111edf7d9a0dca69f6f52eb29" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:21:38.923978 systemd[1]: Started cri-containerd-27c3b67149de18d5045a92a9e592549042bca539ac8d82af916eb24f5a96bce5.scope - libcontainer container 27c3b67149de18d5045a92a9e592549042bca539ac8d82af916eb24f5a96bce5. 
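The wall of driver-call.go and plugins.go errors throughout this window is the kubelet re-probing its FlexVolume plugin directory: it finds the nodeagent~uds entry but the executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present yet, so each probe returns empty output and the "unexpected end of JSON input" unmarshal failure. This is expected noise while calico-node is still coming up; an init container is what normally installs that driver into the host path mounted as flexvol-driver-host above, and the messages stop once it does. Under the FlexVolume contract a driver invoked with "init" is expected to print a small JSON status object; a minimal illustrative stub, assuming standard FlexVolume semantics and Python 3 and in no way Calico's actual uds binary, would be:

    #!/usr/bin/env python3
    # Minimal FlexVolume driver stub: answers the kubelet's "init" probe with
    # the JSON the driver-call layer expects and reports other operations as
    # unsupported. Illustrative only.
    import json
    import sys

    def main():
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # attach: False tells the kubelet not to expect attach/detach calls.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported",
                          "message": "operation %r not implemented" % op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

The audit "BPF prog-id ... op=LOAD/UNLOAD" records that bracket each sandbox start (syscall 321 is bpf(2) and syscall 3 is close(2) on x86_64, comm="runc") are unrelated to the FlexVolume errors: they most likely show runc installing and releasing the eBPF programs, such as the cgroup device filter, while setting up the new container.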
Dec 13 00:21:38.935000 audit: BPF prog-id=165 op=LOAD Dec 13 00:21:38.936000 audit: BPF prog-id=166 op=LOAD Dec 13 00:21:38.936000 audit[3440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633362363731343964653138643530343561393261396535393235 Dec 13 00:21:38.936000 audit: BPF prog-id=166 op=UNLOAD Dec 13 00:21:38.936000 audit[3440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633362363731343964653138643530343561393261396535393235 Dec 13 00:21:38.936000 audit: BPF prog-id=167 op=LOAD Dec 13 00:21:38.936000 audit[3440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633362363731343964653138643530343561393261396535393235 Dec 13 00:21:38.937000 audit: BPF prog-id=168 op=LOAD Dec 13 00:21:38.937000 audit[3440]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633362363731343964653138643530343561393261396535393235 Dec 13 00:21:38.937000 audit: BPF prog-id=168 op=UNLOAD Dec 13 00:21:38.937000 audit[3440]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633362363731343964653138643530343561393261396535393235 Dec 13 00:21:38.937000 audit: BPF prog-id=167 op=UNLOAD Dec 13 00:21:38.937000 audit[3440]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633362363731343964653138643530343561393261396535393235 Dec 13 00:21:38.937000 audit: BPF prog-id=169 op=LOAD Dec 13 00:21:38.937000 audit[3440]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3429 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:38.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633362363731343964653138643530343561393261396535393235 Dec 13 00:21:38.964946 containerd[1633]: time="2025-12-13T00:21:38.964903745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cnwpg,Uid:52a3053f-d422-4936-b4d9-e819ea93590b,Namespace:calico-system,Attempt:0,} returns sandbox id \"27c3b67149de18d5045a92a9e592549042bca539ac8d82af916eb24f5a96bce5\"" Dec 13 00:21:38.966086 kubelet[2818]: E1213 00:21:38.965731 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:40.256259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2496693337.mount: Deactivated successfully. 
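The audit records in between are runc issuing bpf(2) calls (syscall 321 on x86_64) while it sets up the calico-node container; the PROCTITLE field in each record is the caller's command line, hex-encoded with NUL bytes separating the argv entries and capped by the kernel at 128 bytes, which is why the container ID at the end of the decoded path comes out cut short. A small sketch that decodes one of the values above, assuming it is copied verbatim from the record:

    // Decodes a hex-encoded audit PROCTITLE payload into a readable command line.
    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // Value copied from one of the audit records above.
        const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237633362363731343964653138643530343561393261396535393235"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // Prints "runc --root /run/containerd/runc/k8s.io --log ..."; the last argument is a
        // task log path whose container ID is truncated by the kernel's proctitle length cap.
        fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
    }
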
Dec 13 00:21:40.572832 kubelet[2818]: E1213 00:21:40.572110 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:21:41.419932 containerd[1633]: time="2025-12-13T00:21:41.419843927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:41.421147 containerd[1633]: time="2025-12-13T00:21:41.421105251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Dec 13 00:21:41.422511 containerd[1633]: time="2025-12-13T00:21:41.422462975Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:41.435579 containerd[1633]: time="2025-12-13T00:21:41.435535072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:41.436136 containerd[1633]: time="2025-12-13T00:21:41.436089124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.616647059s" Dec 13 00:21:41.436136 containerd[1633]: time="2025-12-13T00:21:41.436126073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 13 00:21:41.437841 containerd[1633]: time="2025-12-13T00:21:41.437207408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 13 00:21:41.447518 containerd[1633]: time="2025-12-13T00:21:41.447455561Z" level=info msg="CreateContainer within sandbox \"0d05f5ec0f8b6b17c106eff5b08509c0dcefd76b5ff680ca6a548bc165d98483\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 00:21:41.463389 containerd[1633]: time="2025-12-13T00:21:41.463300144Z" level=info msg="Container 2071a5678bedef74eff7b6d56f7a5de93e77d6e7a8d2866a11abd41b153f5366: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:21:41.465741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366538061.mount: Deactivated successfully. 
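For the calico/typha pull just logged, containerd reports 33,735,893 bytes read over 2.616647059s, roughly 12.9 MB/s from the registry. That pull window also accounts for the pod startup numbers in the pod_startup_latency_tracker entry a little further down: watchObservedRunningTime minus podCreationTimestamp gives the 3.670038998s end-to-end figure, and subtracting the firstStartedPulling-to-lastFinishedPulling interval (about 2.617936s) lands within a nanosecond of the reported podStartSLOduration of 1.052102613s, consistent with the SLO figure excluding image-pull time. A small sketch of that arithmetic, assuming the timestamps are copied verbatim from those entries (the exclusion of pull time is an inference from these numbers):

    // Reproduces the pod startup duration arithmetic from the timestamps in the
    // surrounding kubelet log entries.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matching the kubelet's printed "2025-12-13 00:21:38.819071306 +0000 UTC" form.
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-12-13 00:21:38 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2025-12-13 00:21:38.819071306 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2025-12-13 00:21:41.437007692 +0000 UTC")  // lastFinishedPulling
        running := mustParse("2025-12-13 00:21:41.670038998 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)        // ≈ 3.670038998s (podStartE2EDuration)
        pulling := lastPull.Sub(firstPull) // ≈ 2.617936386s spent pulling images
        fmt.Println(e2e, pulling, e2e-pulling) // e2e-pulling ≈ the logged podStartSLOduration
    }
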
Dec 13 00:21:41.474237 containerd[1633]: time="2025-12-13T00:21:41.474188451Z" level=info msg="CreateContainer within sandbox \"0d05f5ec0f8b6b17c106eff5b08509c0dcefd76b5ff680ca6a548bc165d98483\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2071a5678bedef74eff7b6d56f7a5de93e77d6e7a8d2866a11abd41b153f5366\"" Dec 13 00:21:41.474805 containerd[1633]: time="2025-12-13T00:21:41.474755808Z" level=info msg="StartContainer for \"2071a5678bedef74eff7b6d56f7a5de93e77d6e7a8d2866a11abd41b153f5366\"" Dec 13 00:21:41.476508 containerd[1633]: time="2025-12-13T00:21:41.476470304Z" level=info msg="connecting to shim 2071a5678bedef74eff7b6d56f7a5de93e77d6e7a8d2866a11abd41b153f5366" address="unix:///run/containerd/s/56913886b47aa7247d2878fd35c73b813a92ac41ff0a81c582ff3f21185ca5cb" protocol=ttrpc version=3 Dec 13 00:21:41.499346 systemd[1]: Started cri-containerd-2071a5678bedef74eff7b6d56f7a5de93e77d6e7a8d2866a11abd41b153f5366.scope - libcontainer container 2071a5678bedef74eff7b6d56f7a5de93e77d6e7a8d2866a11abd41b153f5366. Dec 13 00:21:41.517000 audit: BPF prog-id=170 op=LOAD Dec 13 00:21:41.518000 audit: BPF prog-id=171 op=LOAD Dec 13 00:21:41.518000 audit[3475]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3307 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:41.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230373161353637386265646566373465666637623664353666376135 Dec 13 00:21:41.518000 audit: BPF prog-id=171 op=UNLOAD Dec 13 00:21:41.518000 audit[3475]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:41.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230373161353637386265646566373465666637623664353666376135 Dec 13 00:21:41.518000 audit: BPF prog-id=172 op=LOAD Dec 13 00:21:41.518000 audit[3475]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3307 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:41.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230373161353637386265646566373465666637623664353666376135 Dec 13 00:21:41.518000 audit: BPF prog-id=173 op=LOAD Dec 13 00:21:41.518000 audit[3475]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3307 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:41.518000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230373161353637386265646566373465666637623664353666376135 Dec 13 00:21:41.518000 audit: BPF prog-id=173 op=UNLOAD Dec 13 00:21:41.518000 audit[3475]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:41.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230373161353637386265646566373465666637623664353666376135 Dec 13 00:21:41.518000 audit: BPF prog-id=172 op=UNLOAD Dec 13 00:21:41.518000 audit[3475]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3307 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:41.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230373161353637386265646566373465666637623664353666376135 Dec 13 00:21:41.518000 audit: BPF prog-id=174 op=LOAD Dec 13 00:21:41.518000 audit[3475]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3307 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:41.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230373161353637386265646566373465666637623664353666376135 Dec 13 00:21:41.572623 containerd[1633]: time="2025-12-13T00:21:41.572571165Z" level=info msg="StartContainer for \"2071a5678bedef74eff7b6d56f7a5de93e77d6e7a8d2866a11abd41b153f5366\" returns successfully" Dec 13 00:21:41.643160 kubelet[2818]: E1213 00:21:41.643111 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:41.670228 kubelet[2818]: I1213 00:21:41.670057 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5589dbc544-mfndd" podStartSLOduration=1.052102613 podStartE2EDuration="3.670038998s" podCreationTimestamp="2025-12-13 00:21:38 +0000 UTC" firstStartedPulling="2025-12-13 00:21:38.819071306 +0000 UTC m=+20.350313611" lastFinishedPulling="2025-12-13 00:21:41.437007692 +0000 UTC m=+22.968249996" observedRunningTime="2025-12-13 00:21:41.669750505 +0000 UTC m=+23.200992809" watchObservedRunningTime="2025-12-13 00:21:41.670038998 +0000 UTC m=+23.201281302" Dec 13 00:21:41.677156 kubelet[2818]: E1213 00:21:41.677068 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 
13 00:21:41.677156 kubelet[2818]: W1213 00:21:41.677145 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.677358 kubelet[2818]: E1213 00:21:41.677197 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.677683 kubelet[2818]: E1213 00:21:41.677652 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.677683 kubelet[2818]: W1213 00:21:41.677668 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.677683 kubelet[2818]: E1213 00:21:41.677681 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.678835 kubelet[2818]: E1213 00:21:41.677997 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.678835 kubelet[2818]: W1213 00:21:41.678012 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.678835 kubelet[2818]: E1213 00:21:41.678022 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.678835 kubelet[2818]: E1213 00:21:41.678371 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.678835 kubelet[2818]: W1213 00:21:41.678382 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.678835 kubelet[2818]: E1213 00:21:41.678394 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.679090 kubelet[2818]: E1213 00:21:41.679024 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.679090 kubelet[2818]: W1213 00:21:41.679036 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.679090 kubelet[2818]: E1213 00:21:41.679049 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:41.679355 kubelet[2818]: E1213 00:21:41.679332 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.679355 kubelet[2818]: W1213 00:21:41.679347 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.679355 kubelet[2818]: E1213 00:21:41.679357 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.679637 kubelet[2818]: E1213 00:21:41.679608 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.679714 kubelet[2818]: W1213 00:21:41.679624 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.679714 kubelet[2818]: E1213 00:21:41.679675 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.679989 kubelet[2818]: E1213 00:21:41.679963 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.679989 kubelet[2818]: W1213 00:21:41.679980 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.679989 kubelet[2818]: E1213 00:21:41.679990 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.680465 kubelet[2818]: E1213 00:21:41.680371 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.680465 kubelet[2818]: W1213 00:21:41.680389 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.680465 kubelet[2818]: E1213 00:21:41.680405 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.680669 kubelet[2818]: E1213 00:21:41.680646 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.680727 kubelet[2818]: W1213 00:21:41.680684 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.680727 kubelet[2818]: E1213 00:21:41.680699 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:41.681622 kubelet[2818]: E1213 00:21:41.680948 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.681622 kubelet[2818]: W1213 00:21:41.680963 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.681622 kubelet[2818]: E1213 00:21:41.680973 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.681774 kubelet[2818]: E1213 00:21:41.681667 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.681774 kubelet[2818]: W1213 00:21:41.681682 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.681774 kubelet[2818]: E1213 00:21:41.681695 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.681985 kubelet[2818]: E1213 00:21:41.681957 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.681985 kubelet[2818]: W1213 00:21:41.681972 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.681985 kubelet[2818]: E1213 00:21:41.681985 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.682581 kubelet[2818]: E1213 00:21:41.682554 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.682581 kubelet[2818]: W1213 00:21:41.682570 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.682581 kubelet[2818]: E1213 00:21:41.682579 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.683838 kubelet[2818]: E1213 00:21:41.682852 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.683838 kubelet[2818]: W1213 00:21:41.682871 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.683838 kubelet[2818]: E1213 00:21:41.682887 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:41.695593 kubelet[2818]: E1213 00:21:41.695495 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.695593 kubelet[2818]: W1213 00:21:41.695521 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.695593 kubelet[2818]: E1213 00:21:41.695543 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.695875 kubelet[2818]: E1213 00:21:41.695796 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.695875 kubelet[2818]: W1213 00:21:41.695804 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.695875 kubelet[2818]: E1213 00:21:41.695836 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.696071 kubelet[2818]: E1213 00:21:41.696044 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.696071 kubelet[2818]: W1213 00:21:41.696056 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.696071 kubelet[2818]: E1213 00:21:41.696067 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.696305 kubelet[2818]: E1213 00:21:41.696278 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.696305 kubelet[2818]: W1213 00:21:41.696292 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.696351 kubelet[2818]: E1213 00:21:41.696318 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.696610 kubelet[2818]: E1213 00:21:41.696584 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.696610 kubelet[2818]: W1213 00:21:41.696596 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.696662 kubelet[2818]: E1213 00:21:41.696617 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:41.696916 kubelet[2818]: E1213 00:21:41.696897 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.696916 kubelet[2818]: W1213 00:21:41.696909 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.696985 kubelet[2818]: E1213 00:21:41.696960 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.697156 kubelet[2818]: E1213 00:21:41.697126 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.697156 kubelet[2818]: W1213 00:21:41.697141 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.697216 kubelet[2818]: E1213 00:21:41.697189 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.697390 kubelet[2818]: E1213 00:21:41.697375 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.697390 kubelet[2818]: W1213 00:21:41.697385 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.697478 kubelet[2818]: E1213 00:21:41.697462 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.697670 kubelet[2818]: E1213 00:21:41.697651 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.697670 kubelet[2818]: W1213 00:21:41.697661 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.697670 kubelet[2818]: E1213 00:21:41.697672 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.698251 kubelet[2818]: E1213 00:21:41.698223 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.698251 kubelet[2818]: W1213 00:21:41.698241 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.698251 kubelet[2818]: E1213 00:21:41.698251 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:41.698524 kubelet[2818]: E1213 00:21:41.698504 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.698524 kubelet[2818]: W1213 00:21:41.698517 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.698677 kubelet[2818]: E1213 00:21:41.698531 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.698794 kubelet[2818]: E1213 00:21:41.698777 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.698794 kubelet[2818]: W1213 00:21:41.698790 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.698965 kubelet[2818]: E1213 00:21:41.698923 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.699675 kubelet[2818]: E1213 00:21:41.699646 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.699675 kubelet[2818]: W1213 00:21:41.699662 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.699675 kubelet[2818]: E1213 00:21:41.699676 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.699927 kubelet[2818]: E1213 00:21:41.699914 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.699927 kubelet[2818]: W1213 00:21:41.699924 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.700000 kubelet[2818]: E1213 00:21:41.699943 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.700196 kubelet[2818]: E1213 00:21:41.700178 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.700196 kubelet[2818]: W1213 00:21:41.700190 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.700278 kubelet[2818]: E1213 00:21:41.700249 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:41.701004 kubelet[2818]: E1213 00:21:41.700975 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.701004 kubelet[2818]: W1213 00:21:41.700989 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.701259 kubelet[2818]: E1213 00:21:41.701239 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.702996 kubelet[2818]: E1213 00:21:41.702974 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.702996 kubelet[2818]: W1213 00:21:41.702987 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.702996 kubelet[2818]: E1213 00:21:41.703001 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:41.703614 kubelet[2818]: E1213 00:21:41.703596 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:41.703614 kubelet[2818]: W1213 00:21:41.703608 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:41.703681 kubelet[2818]: E1213 00:21:41.703618 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.572101 kubelet[2818]: E1213 00:21:42.572031 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:21:42.644606 kubelet[2818]: I1213 00:21:42.644563 2818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:21:42.645112 kubelet[2818]: E1213 00:21:42.644918 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:42.687260 kubelet[2818]: E1213 00:21:42.687204 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.687260 kubelet[2818]: W1213 00:21:42.687235 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.687260 kubelet[2818]: E1213 00:21:42.687262 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:42.687613 kubelet[2818]: E1213 00:21:42.687580 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.687613 kubelet[2818]: W1213 00:21:42.687606 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.687691 kubelet[2818]: E1213 00:21:42.687633 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.687948 kubelet[2818]: E1213 00:21:42.687902 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.687948 kubelet[2818]: W1213 00:21:42.687924 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.687948 kubelet[2818]: E1213 00:21:42.687937 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.688210 kubelet[2818]: E1213 00:21:42.688182 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.688210 kubelet[2818]: W1213 00:21:42.688204 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.688288 kubelet[2818]: E1213 00:21:42.688219 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.688469 kubelet[2818]: E1213 00:21:42.688450 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.688469 kubelet[2818]: W1213 00:21:42.688464 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.688541 kubelet[2818]: E1213 00:21:42.688476 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.688709 kubelet[2818]: E1213 00:21:42.688683 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.688709 kubelet[2818]: W1213 00:21:42.688697 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.688709 kubelet[2818]: E1213 00:21:42.688707 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:42.688947 kubelet[2818]: E1213 00:21:42.688925 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.688947 kubelet[2818]: W1213 00:21:42.688939 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.689026 kubelet[2818]: E1213 00:21:42.688950 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.689212 kubelet[2818]: E1213 00:21:42.689194 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.689212 kubelet[2818]: W1213 00:21:42.689207 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.689312 kubelet[2818]: E1213 00:21:42.689218 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.689451 kubelet[2818]: E1213 00:21:42.689430 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.689451 kubelet[2818]: W1213 00:21:42.689444 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.689529 kubelet[2818]: E1213 00:21:42.689455 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.689697 kubelet[2818]: E1213 00:21:42.689664 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.689697 kubelet[2818]: W1213 00:21:42.689683 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.689697 kubelet[2818]: E1213 00:21:42.689695 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.689929 kubelet[2818]: E1213 00:21:42.689911 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.689929 kubelet[2818]: W1213 00:21:42.689923 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.690019 kubelet[2818]: E1213 00:21:42.689933 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:42.690146 kubelet[2818]: E1213 00:21:42.690128 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.690146 kubelet[2818]: W1213 00:21:42.690140 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.690215 kubelet[2818]: E1213 00:21:42.690150 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.690383 kubelet[2818]: E1213 00:21:42.690365 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.690383 kubelet[2818]: W1213 00:21:42.690377 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.690463 kubelet[2818]: E1213 00:21:42.690387 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.690635 kubelet[2818]: E1213 00:21:42.690615 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.690635 kubelet[2818]: W1213 00:21:42.690628 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.690710 kubelet[2818]: E1213 00:21:42.690639 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.690873 kubelet[2818]: E1213 00:21:42.690856 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.690873 kubelet[2818]: W1213 00:21:42.690868 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.690939 kubelet[2818]: E1213 00:21:42.690877 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.701379 kubelet[2818]: E1213 00:21:42.701332 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.701379 kubelet[2818]: W1213 00:21:42.701358 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.701379 kubelet[2818]: E1213 00:21:42.701382 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:42.701668 kubelet[2818]: E1213 00:21:42.701637 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.701668 kubelet[2818]: W1213 00:21:42.701665 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.701737 kubelet[2818]: E1213 00:21:42.701684 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.701938 kubelet[2818]: E1213 00:21:42.701914 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.701938 kubelet[2818]: W1213 00:21:42.701930 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.702013 kubelet[2818]: E1213 00:21:42.701947 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.702176 kubelet[2818]: E1213 00:21:42.702151 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.702176 kubelet[2818]: W1213 00:21:42.702161 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.702176 kubelet[2818]: E1213 00:21:42.702176 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.702364 kubelet[2818]: E1213 00:21:42.702339 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.702364 kubelet[2818]: W1213 00:21:42.702350 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.702452 kubelet[2818]: E1213 00:21:42.702364 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.702626 kubelet[2818]: E1213 00:21:42.702609 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.702626 kubelet[2818]: W1213 00:21:42.702620 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.702704 kubelet[2818]: E1213 00:21:42.702636 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:42.702977 kubelet[2818]: E1213 00:21:42.702949 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.702977 kubelet[2818]: W1213 00:21:42.702962 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.703055 kubelet[2818]: E1213 00:21:42.702976 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.703166 kubelet[2818]: E1213 00:21:42.703148 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.703166 kubelet[2818]: W1213 00:21:42.703159 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.703226 kubelet[2818]: E1213 00:21:42.703174 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.703514 kubelet[2818]: E1213 00:21:42.703477 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.703567 kubelet[2818]: W1213 00:21:42.703508 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.703567 kubelet[2818]: E1213 00:21:42.703545 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.703768 kubelet[2818]: E1213 00:21:42.703751 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.703768 kubelet[2818]: W1213 00:21:42.703762 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.703856 kubelet[2818]: E1213 00:21:42.703777 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.703996 kubelet[2818]: E1213 00:21:42.703980 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.703996 kubelet[2818]: W1213 00:21:42.703991 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.704069 kubelet[2818]: E1213 00:21:42.704004 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:42.704179 kubelet[2818]: E1213 00:21:42.704162 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.704179 kubelet[2818]: W1213 00:21:42.704171 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.704243 kubelet[2818]: E1213 00:21:42.704183 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.704388 kubelet[2818]: E1213 00:21:42.704372 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.704388 kubelet[2818]: W1213 00:21:42.704381 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.704480 kubelet[2818]: E1213 00:21:42.704418 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.704611 kubelet[2818]: E1213 00:21:42.704593 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.704611 kubelet[2818]: W1213 00:21:42.704603 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.704674 kubelet[2818]: E1213 00:21:42.704620 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.704859 kubelet[2818]: E1213 00:21:42.704842 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.704859 kubelet[2818]: W1213 00:21:42.704852 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.704929 kubelet[2818]: E1213 00:21:42.704865 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.705100 kubelet[2818]: E1213 00:21:42.705082 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.705100 kubelet[2818]: W1213 00:21:42.705092 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.705166 kubelet[2818]: E1213 00:21:42.705107 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 00:21:42.705431 kubelet[2818]: E1213 00:21:42.705398 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.705431 kubelet[2818]: W1213 00:21:42.705426 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.705524 kubelet[2818]: E1213 00:21:42.705446 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:42.705676 kubelet[2818]: E1213 00:21:42.705659 2818 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 00:21:42.705676 kubelet[2818]: W1213 00:21:42.705670 2818 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 00:21:42.705735 kubelet[2818]: E1213 00:21:42.705680 2818 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 00:21:43.047779 containerd[1633]: time="2025-12-13T00:21:43.047638130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:43.048997 containerd[1633]: time="2025-12-13T00:21:43.048753157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 13 00:21:43.050511 containerd[1633]: time="2025-12-13T00:21:43.050477680Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:43.052509 containerd[1633]: time="2025-12-13T00:21:43.052473665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:43.053147 containerd[1633]: time="2025-12-13T00:21:43.053117015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.6158745s" Dec 13 00:21:43.053207 containerd[1633]: time="2025-12-13T00:21:43.053150388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 13 00:21:43.055021 containerd[1633]: time="2025-12-13T00:21:43.054986842Z" level=info msg="CreateContainer within sandbox \"27c3b67149de18d5045a92a9e592549042bca539ac8d82af916eb24f5a96bce5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 00:21:43.064219 containerd[1633]: time="2025-12-13T00:21:43.064156929Z" level=info msg="Container 6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0: CDI 
devices from CRI Config.CDIDevices: []" Dec 13 00:21:43.072731 containerd[1633]: time="2025-12-13T00:21:43.072695699Z" level=info msg="CreateContainer within sandbox \"27c3b67149de18d5045a92a9e592549042bca539ac8d82af916eb24f5a96bce5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0\"" Dec 13 00:21:43.073196 containerd[1633]: time="2025-12-13T00:21:43.073172435Z" level=info msg="StartContainer for \"6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0\"" Dec 13 00:21:43.075188 containerd[1633]: time="2025-12-13T00:21:43.075161045Z" level=info msg="connecting to shim 6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0" address="unix:///run/containerd/s/d2bacede5c6d59f83bd4b4a2be62373a6fa675f111edf7d9a0dca69f6f52eb29" protocol=ttrpc version=3 Dec 13 00:21:43.101990 systemd[1]: Started cri-containerd-6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0.scope - libcontainer container 6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0. Dec 13 00:21:43.171000 audit: BPF prog-id=175 op=LOAD Dec 13 00:21:43.173241 kernel: kauditd_printk_skb: 80 callbacks suppressed Dec 13 00:21:43.173316 kernel: audit: type=1334 audit(1765585303.171:577): prog-id=175 op=LOAD Dec 13 00:21:43.171000 audit[3590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3429 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:43.180280 kernel: audit: type=1300 audit(1765585303.171:577): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3429 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:43.180455 kernel: audit: type=1327 audit(1765585303.171:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637363661373362396339333463383839363838333934396634646231 Dec 13 00:21:43.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637363661373362396339333463383839363838333934396634646231 Dec 13 00:21:43.171000 audit: BPF prog-id=176 op=LOAD Dec 13 00:21:43.187069 kernel: audit: type=1334 audit(1765585303.171:578): prog-id=176 op=LOAD Dec 13 00:21:43.187121 kernel: audit: type=1300 audit(1765585303.171:578): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3429 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:43.171000 audit[3590]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3429 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:43.171000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637363661373362396339333463383839363838333934396634646231 Dec 13 00:21:43.198682 kernel: audit: type=1327 audit(1765585303.171:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637363661373362396339333463383839363838333934396634646231 Dec 13 00:21:43.171000 audit: BPF prog-id=176 op=UNLOAD Dec 13 00:21:43.200304 kernel: audit: type=1334 audit(1765585303.171:579): prog-id=176 op=UNLOAD Dec 13 00:21:43.171000 audit[3590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:43.205971 kernel: audit: type=1300 audit(1765585303.171:579): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:43.206032 kernel: audit: type=1327 audit(1765585303.171:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637363661373362396339333463383839363838333934396634646231 Dec 13 00:21:43.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637363661373362396339333463383839363838333934396634646231 Dec 13 00:21:43.171000 audit: BPF prog-id=175 op=UNLOAD Dec 13 00:21:43.212720 kernel: audit: type=1334 audit(1765585303.171:580): prog-id=175 op=UNLOAD Dec 13 00:21:43.171000 audit[3590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:43.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637363661373362396339333463383839363838333934396634646231 Dec 13 00:21:43.171000 audit: BPF prog-id=177 op=LOAD Dec 13 00:21:43.171000 audit[3590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3429 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:43.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637363661373362396339333463383839363838333934396634646231 Dec 13 00:21:43.217667 
containerd[1633]: time="2025-12-13T00:21:43.217624173Z" level=info msg="StartContainer for \"6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0\" returns successfully" Dec 13 00:21:43.229658 systemd[1]: cri-containerd-6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0.scope: Deactivated successfully. Dec 13 00:21:43.233495 containerd[1633]: time="2025-12-13T00:21:43.233451642Z" level=info msg="received container exit event container_id:\"6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0\" id:\"6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0\" pid:3603 exited_at:{seconds:1765585303 nanos:233160945}" Dec 13 00:21:43.233000 audit: BPF prog-id=177 op=UNLOAD Dec 13 00:21:43.257663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6766a73b9c934c8896883949f4db1a0c17b81cd259b04f7a8d185531b515c7f0-rootfs.mount: Deactivated successfully. Dec 13 00:21:43.649597 kubelet[2818]: E1213 00:21:43.649546 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:43.651051 containerd[1633]: time="2025-12-13T00:21:43.650993013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 13 00:21:44.572391 kubelet[2818]: E1213 00:21:44.572314 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:21:46.572804 kubelet[2818]: E1213 00:21:46.572694 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:21:48.574839 kubelet[2818]: E1213 00:21:48.574770 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:21:48.783998 containerd[1633]: time="2025-12-13T00:21:48.783908612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:48.786291 containerd[1633]: time="2025-12-13T00:21:48.786212692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 13 00:21:48.788206 containerd[1633]: time="2025-12-13T00:21:48.788160954Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:48.819787 containerd[1633]: time="2025-12-13T00:21:48.819721877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:21:48.820370 containerd[1633]: time="2025-12-13T00:21:48.820311525Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.169269601s" Dec 13 00:21:48.820370 containerd[1633]: time="2025-12-13T00:21:48.820358664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 13 00:21:48.822734 containerd[1633]: time="2025-12-13T00:21:48.822682140Z" level=info msg="CreateContainer within sandbox \"27c3b67149de18d5045a92a9e592549042bca539ac8d82af916eb24f5a96bce5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 00:21:48.849380 containerd[1633]: time="2025-12-13T00:21:48.849238197Z" level=info msg="Container 553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:21:48.860945 containerd[1633]: time="2025-12-13T00:21:48.860882069Z" level=info msg="CreateContainer within sandbox \"27c3b67149de18d5045a92a9e592549042bca539ac8d82af916eb24f5a96bce5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea\"" Dec 13 00:21:48.861944 containerd[1633]: time="2025-12-13T00:21:48.861886587Z" level=info msg="StartContainer for \"553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea\"" Dec 13 00:21:48.865042 containerd[1633]: time="2025-12-13T00:21:48.864997244Z" level=info msg="connecting to shim 553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea" address="unix:///run/containerd/s/d2bacede5c6d59f83bd4b4a2be62373a6fa675f111edf7d9a0dca69f6f52eb29" protocol=ttrpc version=3 Dec 13 00:21:48.891058 systemd[1]: Started cri-containerd-553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea.scope - libcontainer container 553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea. 
Dec 13 00:21:48.966000 audit: BPF prog-id=178 op=LOAD Dec 13 00:21:48.968401 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 13 00:21:48.968486 kernel: audit: type=1334 audit(1765585308.966:583): prog-id=178 op=LOAD Dec 13 00:21:48.966000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3429 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:48.974750 kernel: audit: type=1300 audit(1765585308.966:583): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3429 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:48.974861 kernel: audit: type=1327 audit(1765585308.966:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535333231396332636661363065353737653038623733626231366366 Dec 13 00:21:48.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535333231396332636661363065353737653038623733626231366366 Dec 13 00:21:48.966000 audit: BPF prog-id=179 op=LOAD Dec 13 00:21:48.981226 kernel: audit: type=1334 audit(1765585308.966:584): prog-id=179 op=LOAD Dec 13 00:21:48.966000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3429 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:48.986830 kernel: audit: type=1300 audit(1765585308.966:584): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3429 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:48.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535333231396332636661363065353737653038623733626231366366 Dec 13 00:21:48.991844 kernel: audit: type=1327 audit(1765585308.966:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535333231396332636661363065353737653038623733626231366366 Dec 13 00:21:48.991961 kernel: audit: type=1334 audit(1765585308.966:585): prog-id=179 op=UNLOAD Dec 13 00:21:48.966000 audit: BPF prog-id=179 op=UNLOAD Dec 13 00:21:48.966000 audit[3651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:48.998260 kernel: audit: type=1300 
audit(1765585308.966:585): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:48.998506 kernel: audit: type=1327 audit(1765585308.966:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535333231396332636661363065353737653038623733626231366366 Dec 13 00:21:48.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535333231396332636661363065353737653038623733626231366366 Dec 13 00:21:48.966000 audit: BPF prog-id=178 op=UNLOAD Dec 13 00:21:49.005891 kernel: audit: type=1334 audit(1765585308.966:586): prog-id=178 op=UNLOAD Dec 13 00:21:48.966000 audit[3651]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:48.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535333231396332636661363065353737653038623733626231366366 Dec 13 00:21:48.966000 audit: BPF prog-id=180 op=LOAD Dec 13 00:21:48.966000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3429 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:21:48.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535333231396332636661363065353737653038623733626231366366 Dec 13 00:21:49.009131 containerd[1633]: time="2025-12-13T00:21:49.009065080Z" level=info msg="StartContainer for \"553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea\" returns successfully" Dec 13 00:21:49.663108 kubelet[2818]: E1213 00:21:49.663076 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:50.572755 kubelet[2818]: E1213 00:21:50.572704 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:21:50.577437 systemd[1]: cri-containerd-553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea.scope: Deactivated successfully. 
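The "Nameserver limits exceeded" warnings from dns.go reflect the old resolv.conf ceiling of three nameservers: kubelet keeps the first three resolvers for the pod and logs the applied line. A rough Go sketch of that clamping step (clampNameservers is a hypothetical helper, and the fourth resolver in the example is invented purely to trigger the truncation):

    package main

    import (
        "fmt"
        "strings"
    )

    // maxNameservers mirrors the classic resolv.conf limit (MAXNS = 3) behind
    // the "Nameserver limits exceeded" warning.
    const maxNameservers = 3

    // clampNameservers keeps only the first three resolvers and reports whether
    // anything was dropped.
    func clampNameservers(all []string) (applied []string, truncated bool) {
        if len(all) <= maxNameservers {
            return all, false
        }
        return all[:maxNameservers], true
    }

    func main() {
        resolvers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
        if applied, truncated := clampNameservers(resolvers); truncated {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
                strings.Join(applied, " "))
        }
    }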
Dec 13 00:21:50.578337 systemd[1]: cri-containerd-553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea.scope: Consumed 602ms CPU time, 176.4M memory peak, 2.7M read from disk, 171.3M written to disk. Dec 13 00:21:50.580228 containerd[1633]: time="2025-12-13T00:21:50.580151743Z" level=info msg="received container exit event container_id:\"553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea\" id:\"553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea\" pid:3664 exited_at:{seconds:1765585310 nanos:579910049}" Dec 13 00:21:50.583000 audit: BPF prog-id=180 op=UNLOAD Dec 13 00:21:50.606070 kubelet[2818]: I1213 00:21:50.606041 2818 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 13 00:21:50.610274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-553219c2cfa60e577e08b73bb16cff6fd413793cc29c4cdb95c5167234709aea-rootfs.mount: Deactivated successfully. Dec 13 00:21:50.636245 systemd[1]: Created slice kubepods-burstable-pod7a41fb7c_fb92_4424_905a_7d7e492fd340.slice - libcontainer container kubepods-burstable-pod7a41fb7c_fb92_4424_905a_7d7e492fd340.slice. Dec 13 00:21:50.656467 systemd[1]: Created slice kubepods-besteffort-pod9ea90886_dcbd_4bc0_9591_0e36e6c0d4a7.slice - libcontainer container kubepods-besteffort-pod9ea90886_dcbd_4bc0_9591_0e36e6c0d4a7.slice. Dec 13 00:21:50.661515 kubelet[2818]: I1213 00:21:50.661468 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m5lk\" (UniqueName: \"kubernetes.io/projected/7a41fb7c-fb92-4424-905a-7d7e492fd340-kube-api-access-4m5lk\") pod \"coredns-668d6bf9bc-2dg5q\" (UID: \"7a41fb7c-fb92-4424-905a-7d7e492fd340\") " pod="kube-system/coredns-668d6bf9bc-2dg5q" Dec 13 00:21:50.661515 kubelet[2818]: I1213 00:21:50.661502 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41fb7c-fb92-4424-905a-7d7e492fd340-config-volume\") pod \"coredns-668d6bf9bc-2dg5q\" (UID: \"7a41fb7c-fb92-4424-905a-7d7e492fd340\") " pod="kube-system/coredns-668d6bf9bc-2dg5q" Dec 13 00:21:50.663529 systemd[1]: Created slice kubepods-besteffort-podfce4aad9_52fa_4b91_82ff_c6436952148b.slice - libcontainer container kubepods-besteffort-podfce4aad9_52fa_4b91_82ff_c6436952148b.slice. Dec 13 00:21:50.665102 kubelet[2818]: E1213 00:21:50.665071 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:50.670325 systemd[1]: Created slice kubepods-burstable-pod63ef95d4_df6b_4b05_b59b_3f84085ee71d.slice - libcontainer container kubepods-burstable-pod63ef95d4_df6b_4b05_b59b_3f84085ee71d.slice. Dec 13 00:21:50.674947 systemd[1]: Created slice kubepods-besteffort-pod00a965a0_569e_4742_bf83_196c624e0f8f.slice - libcontainer container kubepods-besteffort-pod00a965a0_569e_4742_bf83_196c624e0f8f.slice. Dec 13 00:21:50.680442 systemd[1]: Created slice kubepods-besteffort-pod7ce6ad04_f89c_40a1_981e_2b7e39fe58e0.slice - libcontainer container kubepods-besteffort-pod7ce6ad04_f89c_40a1_981e_2b7e39fe58e0.slice. Dec 13 00:21:50.687760 systemd[1]: Created slice kubepods-besteffort-podb431ca61_6062_45e4_a35d_3ec7ff6dccb1.slice - libcontainer container kubepods-besteffort-podb431ca61_6062_45e4_a35d_3ec7ff6dccb1.slice. 
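The kubepods-burstable-... and kubepods-besteffort-... slices created here follow the systemd cgroup driver's naming scheme: the pod's QoS class becomes part of the slice prefix and the dashes in the pod UID are swapped for underscores, since "-" is the separator between levels of the slice hierarchy. A small Go sketch of that mapping (podSliceName is an illustrative helper, not kubelet's actual function; the guaranteed-QoS branch is an assumption about pods that sit directly under kubepods.slice):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName builds the systemd slice name for a pod cgroup: QoS class in
    // the prefix, pod UID with "-" mapped to "_" so it survives as one segment.
    func podSliceName(qos, podUID string) string {
        uid := strings.ReplaceAll(podUID, "-", "_")
        if qos == "guaranteed" {
            return fmt.Sprintf("kubepods-pod%s.slice", uid)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
    }

    func main() {
        // Reproduces the names in the surrounding log entries.
        fmt.Println(podSliceName("burstable", "7a41fb7c-fb92-4424-905a-7d7e492fd340"))
        fmt.Println(podSliceName("besteffort", "9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7"))
    }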
Dec 13 00:21:50.762419 kubelet[2818]: I1213 00:21:50.762330 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhk5x\" (UniqueName: \"kubernetes.io/projected/00a965a0-569e-4742-bf83-196c624e0f8f-kube-api-access-dhk5x\") pod \"calico-apiserver-58486567b6-lnz72\" (UID: \"00a965a0-569e-4742-bf83-196c624e0f8f\") " pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" Dec 13 00:21:50.762419 kubelet[2818]: I1213 00:21:50.762390 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68s4\" (UniqueName: \"kubernetes.io/projected/63ef95d4-df6b-4b05-b59b-3f84085ee71d-kube-api-access-x68s4\") pod \"coredns-668d6bf9bc-wj7mw\" (UID: \"63ef95d4-df6b-4b05-b59b-3f84085ee71d\") " pod="kube-system/coredns-668d6bf9bc-wj7mw" Dec 13 00:21:50.762419 kubelet[2818]: I1213 00:21:50.762419 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/00a965a0-569e-4742-bf83-196c624e0f8f-calico-apiserver-certs\") pod \"calico-apiserver-58486567b6-lnz72\" (UID: \"00a965a0-569e-4742-bf83-196c624e0f8f\") " pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" Dec 13 00:21:50.762419 kubelet[2818]: I1213 00:21:50.762441 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfmql\" (UniqueName: \"kubernetes.io/projected/b431ca61-6062-45e4-a35d-3ec7ff6dccb1-kube-api-access-gfmql\") pod \"calico-apiserver-58486567b6-tgd79\" (UID: \"b431ca61-6062-45e4-a35d-3ec7ff6dccb1\") " pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" Dec 13 00:21:50.762739 kubelet[2818]: I1213 00:21:50.762471 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntr4p\" (UniqueName: \"kubernetes.io/projected/fce4aad9-52fa-4b91-82ff-c6436952148b-kube-api-access-ntr4p\") pod \"calico-kube-controllers-7cf9f886c6-9fch9\" (UID: \"fce4aad9-52fa-4b91-82ff-c6436952148b\") " pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" Dec 13 00:21:50.762739 kubelet[2818]: I1213 00:21:50.762516 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbhp\" (UniqueName: \"kubernetes.io/projected/7ce6ad04-f89c-40a1-981e-2b7e39fe58e0-kube-api-access-sbbhp\") pod \"goldmane-666569f655-f54tn\" (UID: \"7ce6ad04-f89c-40a1-981e-2b7e39fe58e0\") " pod="calico-system/goldmane-666569f655-f54tn" Dec 13 00:21:50.762739 kubelet[2818]: I1213 00:21:50.762577 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/63ef95d4-df6b-4b05-b59b-3f84085ee71d-config-volume\") pod \"coredns-668d6bf9bc-wj7mw\" (UID: \"63ef95d4-df6b-4b05-b59b-3f84085ee71d\") " pod="kube-system/coredns-668d6bf9bc-wj7mw" Dec 13 00:21:50.762739 kubelet[2818]: I1213 00:21:50.762620 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fce4aad9-52fa-4b91-82ff-c6436952148b-tigera-ca-bundle\") pod \"calico-kube-controllers-7cf9f886c6-9fch9\" (UID: \"fce4aad9-52fa-4b91-82ff-c6436952148b\") " pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" Dec 13 00:21:50.762739 kubelet[2818]: I1213 00:21:50.762641 2818 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ce6ad04-f89c-40a1-981e-2b7e39fe58e0-config\") pod \"goldmane-666569f655-f54tn\" (UID: \"7ce6ad04-f89c-40a1-981e-2b7e39fe58e0\") " pod="calico-system/goldmane-666569f655-f54tn" Dec 13 00:21:50.762928 kubelet[2818]: I1213 00:21:50.762664 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7ce6ad04-f89c-40a1-981e-2b7e39fe58e0-goldmane-key-pair\") pod \"goldmane-666569f655-f54tn\" (UID: \"7ce6ad04-f89c-40a1-981e-2b7e39fe58e0\") " pod="calico-system/goldmane-666569f655-f54tn" Dec 13 00:21:50.762928 kubelet[2818]: I1213 00:21:50.762685 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b431ca61-6062-45e4-a35d-3ec7ff6dccb1-calico-apiserver-certs\") pod \"calico-apiserver-58486567b6-tgd79\" (UID: \"b431ca61-6062-45e4-a35d-3ec7ff6dccb1\") " pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" Dec 13 00:21:50.762928 kubelet[2818]: I1213 00:21:50.762732 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ce6ad04-f89c-40a1-981e-2b7e39fe58e0-goldmane-ca-bundle\") pod \"goldmane-666569f655-f54tn\" (UID: \"7ce6ad04-f89c-40a1-981e-2b7e39fe58e0\") " pod="calico-system/goldmane-666569f655-f54tn" Dec 13 00:21:50.762928 kubelet[2818]: I1213 00:21:50.762753 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqjvk\" (UniqueName: \"kubernetes.io/projected/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-kube-api-access-qqjvk\") pod \"whisker-59c87b4cdb-28zx8\" (UID: \"9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7\") " pod="calico-system/whisker-59c87b4cdb-28zx8" Dec 13 00:21:50.762928 kubelet[2818]: I1213 00:21:50.762785 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-whisker-ca-bundle\") pod \"whisker-59c87b4cdb-28zx8\" (UID: \"9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7\") " pod="calico-system/whisker-59c87b4cdb-28zx8" Dec 13 00:21:50.763084 kubelet[2818]: I1213 00:21:50.762837 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-whisker-backend-key-pair\") pod \"whisker-59c87b4cdb-28zx8\" (UID: \"9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7\") " pod="calico-system/whisker-59c87b4cdb-28zx8" Dec 13 00:21:50.940131 kubelet[2818]: E1213 00:21:50.939977 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:50.942173 containerd[1633]: time="2025-12-13T00:21:50.942104143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2dg5q,Uid:7a41fb7c-fb92-4424-905a-7d7e492fd340,Namespace:kube-system,Attempt:0,}" Dec 13 00:21:51.076144 containerd[1633]: time="2025-12-13T00:21:51.076061229Z" level=error msg="Failed to destroy network for sandbox \"268303510866935a195bc59f8ee345c7b9a389c9fae078f239f1dfa637fbf5b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.260793 containerd[1633]: time="2025-12-13T00:21:51.260647676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c87b4cdb-28zx8,Uid:9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7,Namespace:calico-system,Attempt:0,}" Dec 13 00:21:51.268302 containerd[1633]: time="2025-12-13T00:21:51.268268433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf9f886c6-9fch9,Uid:fce4aad9-52fa-4b91-82ff-c6436952148b,Namespace:calico-system,Attempt:0,}" Dec 13 00:21:51.273573 kubelet[2818]: E1213 00:21:51.273517 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:51.273884 containerd[1633]: time="2025-12-13T00:21:51.273846382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wj7mw,Uid:63ef95d4-df6b-4b05-b59b-3f84085ee71d,Namespace:kube-system,Attempt:0,}" Dec 13 00:21:51.278766 containerd[1633]: time="2025-12-13T00:21:51.278726960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58486567b6-lnz72,Uid:00a965a0-569e-4742-bf83-196c624e0f8f,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:21:51.280826 containerd[1633]: time="2025-12-13T00:21:51.280646115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2dg5q,Uid:7a41fb7c-fb92-4424-905a-7d7e492fd340,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"268303510866935a195bc59f8ee345c7b9a389c9fae078f239f1dfa637fbf5b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.280938 kubelet[2818]: E1213 00:21:51.280884 2818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268303510866935a195bc59f8ee345c7b9a389c9fae078f239f1dfa637fbf5b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.281017 kubelet[2818]: E1213 00:21:51.280962 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268303510866935a195bc59f8ee345c7b9a389c9fae078f239f1dfa637fbf5b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2dg5q" Dec 13 00:21:51.281017 kubelet[2818]: E1213 00:21:51.281003 2818 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"268303510866935a195bc59f8ee345c7b9a389c9fae078f239f1dfa637fbf5b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2dg5q" Dec 13 00:21:51.281082 kubelet[2818]: E1213 00:21:51.281055 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-2dg5q_kube-system(7a41fb7c-fb92-4424-905a-7d7e492fd340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2dg5q_kube-system(7a41fb7c-fb92-4424-905a-7d7e492fd340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"268303510866935a195bc59f8ee345c7b9a389c9fae078f239f1dfa637fbf5b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2dg5q" podUID="7a41fb7c-fb92-4424-905a-7d7e492fd340" Dec 13 00:21:51.283419 containerd[1633]: time="2025-12-13T00:21:51.283393276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-f54tn,Uid:7ce6ad04-f89c-40a1-981e-2b7e39fe58e0,Namespace:calico-system,Attempt:0,}" Dec 13 00:21:51.293123 containerd[1633]: time="2025-12-13T00:21:51.293073552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58486567b6-tgd79,Uid:b431ca61-6062-45e4-a35d-3ec7ff6dccb1,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:21:51.636304 containerd[1633]: time="2025-12-13T00:21:51.636072126Z" level=error msg="Failed to destroy network for sandbox \"f23fb2e8f40de91b3be2647469c2d0fb9173dbc71a2e2f4f2c82ca58aefe0b01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.640008 systemd[1]: run-netns-cni\x2d088566be\x2d4989\x2d9c37\x2dda7d\x2d47191df3ce25.mount: Deactivated successfully. Dec 13 00:21:51.650318 containerd[1633]: time="2025-12-13T00:21:51.650153600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf9f886c6-9fch9,Uid:fce4aad9-52fa-4b91-82ff-c6436952148b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f23fb2e8f40de91b3be2647469c2d0fb9173dbc71a2e2f4f2c82ca58aefe0b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.650563 kubelet[2818]: E1213 00:21:51.650506 2818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f23fb2e8f40de91b3be2647469c2d0fb9173dbc71a2e2f4f2c82ca58aefe0b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.650632 kubelet[2818]: E1213 00:21:51.650599 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f23fb2e8f40de91b3be2647469c2d0fb9173dbc71a2e2f4f2c82ca58aefe0b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" Dec 13 00:21:51.650667 kubelet[2818]: E1213 00:21:51.650631 2818 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f23fb2e8f40de91b3be2647469c2d0fb9173dbc71a2e2f4f2c82ca58aefe0b01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" Dec 13 00:21:51.650828 containerd[1633]: time="2025-12-13T00:21:51.650770028Z" level=error msg="Failed to destroy network for sandbox \"0c7926013f722e567db0ab238494174e3abaeaa859187bab49c11aba89e864b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.650888 kubelet[2818]: E1213 00:21:51.650708 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cf9f886c6-9fch9_calico-system(fce4aad9-52fa-4b91-82ff-c6436952148b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cf9f886c6-9fch9_calico-system(fce4aad9-52fa-4b91-82ff-c6436952148b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f23fb2e8f40de91b3be2647469c2d0fb9173dbc71a2e2f4f2c82ca58aefe0b01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" podUID="fce4aad9-52fa-4b91-82ff-c6436952148b" Dec 13 00:21:51.652435 containerd[1633]: time="2025-12-13T00:21:51.652348323Z" level=error msg="Failed to destroy network for sandbox \"93772c15522790ebf3206ed9279845be51c551c03b619d9eb669db8c17518888\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.653669 systemd[1]: run-netns-cni\x2d387ece12\x2db0d9\x2deed8\x2d88a7\x2d48caf2535c8a.mount: Deactivated successfully. Dec 13 00:21:51.657865 systemd[1]: run-netns-cni\x2d205f6437\x2d021a\x2d377e\x2d9cc6\x2dfa825f851a7d.mount: Deactivated successfully. 
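The sandbox failures on either side of this point share one root cause: the Calico CNI plugin reads the node's name from /var/lib/calico/nodename, a file that the calico/node container writes after it starts, so until that container is running every CNI add or delete fails the stat and the pods stay pending. A simplified Go sketch of the check (readNodename is an illustrative stand-in for the plugin's own lookup):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is written by the calico/node container at startup and read
    // by the Calico CNI plugin on every sandbox operation.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename fails with a hint mirroring the log message when the file is
    // not there yet.
    func readNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := readNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("running on node:", name)
    }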
Dec 13 00:21:51.660783 containerd[1633]: time="2025-12-13T00:21:51.660685324Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wj7mw,Uid:63ef95d4-df6b-4b05-b59b-3f84085ee71d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c7926013f722e567db0ab238494174e3abaeaa859187bab49c11aba89e864b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.661143 kubelet[2818]: E1213 00:21:51.661093 2818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c7926013f722e567db0ab238494174e3abaeaa859187bab49c11aba89e864b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.661201 kubelet[2818]: E1213 00:21:51.661175 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c7926013f722e567db0ab238494174e3abaeaa859187bab49c11aba89e864b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wj7mw" Dec 13 00:21:51.661247 kubelet[2818]: E1213 00:21:51.661221 2818 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c7926013f722e567db0ab238494174e3abaeaa859187bab49c11aba89e864b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wj7mw" Dec 13 00:21:51.661314 kubelet[2818]: E1213 00:21:51.661278 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wj7mw_kube-system(63ef95d4-df6b-4b05-b59b-3f84085ee71d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wj7mw_kube-system(63ef95d4-df6b-4b05-b59b-3f84085ee71d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c7926013f722e567db0ab238494174e3abaeaa859187bab49c11aba89e864b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wj7mw" podUID="63ef95d4-df6b-4b05-b59b-3f84085ee71d" Dec 13 00:21:51.664828 containerd[1633]: time="2025-12-13T00:21:51.664638311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c87b4cdb-28zx8,Uid:9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93772c15522790ebf3206ed9279845be51c551c03b619d9eb669db8c17518888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.665092 kubelet[2818]: E1213 00:21:51.665040 2818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"93772c15522790ebf3206ed9279845be51c551c03b619d9eb669db8c17518888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.665159 kubelet[2818]: E1213 00:21:51.665124 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93772c15522790ebf3206ed9279845be51c551c03b619d9eb669db8c17518888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59c87b4cdb-28zx8" Dec 13 00:21:51.665488 kubelet[2818]: E1213 00:21:51.665169 2818 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93772c15522790ebf3206ed9279845be51c551c03b619d9eb669db8c17518888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59c87b4cdb-28zx8" Dec 13 00:21:51.665488 kubelet[2818]: E1213 00:21:51.665236 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59c87b4cdb-28zx8_calico-system(9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59c87b4cdb-28zx8_calico-system(9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93772c15522790ebf3206ed9279845be51c551c03b619d9eb669db8c17518888\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59c87b4cdb-28zx8" podUID="9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7" Dec 13 00:21:51.677608 kubelet[2818]: E1213 00:21:51.677550 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:21:51.682698 containerd[1633]: time="2025-12-13T00:21:51.682595846Z" level=error msg="Failed to destroy network for sandbox \"71a55647113ceeee97c05f771c9c02043357a8aab48fb4d9fc9c6865959c6620\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.685340 containerd[1633]: time="2025-12-13T00:21:51.685283214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 13 00:21:51.687128 systemd[1]: run-netns-cni\x2d7a002b05\x2d8526\x2d5ae9\x2d71a8\x2d3476ff7f7ca1.mount: Deactivated successfully. 
Dec 13 00:21:51.689753 containerd[1633]: time="2025-12-13T00:21:51.689596658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58486567b6-lnz72,Uid:00a965a0-569e-4742-bf83-196c624e0f8f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71a55647113ceeee97c05f771c9c02043357a8aab48fb4d9fc9c6865959c6620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.690019 kubelet[2818]: E1213 00:21:51.689977 2818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71a55647113ceeee97c05f771c9c02043357a8aab48fb4d9fc9c6865959c6620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.690075 kubelet[2818]: E1213 00:21:51.690039 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71a55647113ceeee97c05f771c9c02043357a8aab48fb4d9fc9c6865959c6620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" Dec 13 00:21:51.690100 kubelet[2818]: E1213 00:21:51.690068 2818 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71a55647113ceeee97c05f771c9c02043357a8aab48fb4d9fc9c6865959c6620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" Dec 13 00:21:51.690167 kubelet[2818]: E1213 00:21:51.690125 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58486567b6-lnz72_calico-apiserver(00a965a0-569e-4742-bf83-196c624e0f8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58486567b6-lnz72_calico-apiserver(00a965a0-569e-4742-bf83-196c624e0f8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71a55647113ceeee97c05f771c9c02043357a8aab48fb4d9fc9c6865959c6620\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" podUID="00a965a0-569e-4742-bf83-196c624e0f8f" Dec 13 00:21:51.695229 containerd[1633]: time="2025-12-13T00:21:51.695133880Z" level=error msg="Failed to destroy network for sandbox \"336b1785c88f4adaa519e07bcf24a22dccdf3033eb6fd9947562ab29957532db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.700138 containerd[1633]: time="2025-12-13T00:21:51.698299066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58486567b6-tgd79,Uid:b431ca61-6062-45e4-a35d-3ec7ff6dccb1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"336b1785c88f4adaa519e07bcf24a22dccdf3033eb6fd9947562ab29957532db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.700351 kubelet[2818]: E1213 00:21:51.698718 2818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"336b1785c88f4adaa519e07bcf24a22dccdf3033eb6fd9947562ab29957532db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.700351 kubelet[2818]: E1213 00:21:51.698770 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"336b1785c88f4adaa519e07bcf24a22dccdf3033eb6fd9947562ab29957532db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" Dec 13 00:21:51.700351 kubelet[2818]: E1213 00:21:51.698789 2818 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"336b1785c88f4adaa519e07bcf24a22dccdf3033eb6fd9947562ab29957532db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" Dec 13 00:21:51.700462 kubelet[2818]: E1213 00:21:51.699151 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58486567b6-tgd79_calico-apiserver(b431ca61-6062-45e4-a35d-3ec7ff6dccb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58486567b6-tgd79_calico-apiserver(b431ca61-6062-45e4-a35d-3ec7ff6dccb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"336b1785c88f4adaa519e07bcf24a22dccdf3033eb6fd9947562ab29957532db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" podUID="b431ca61-6062-45e4-a35d-3ec7ff6dccb1" Dec 13 00:21:51.702345 containerd[1633]: time="2025-12-13T00:21:51.702282339Z" level=error msg="Failed to destroy network for sandbox \"9993995c0dcbcb86c5b9d1f58b12b775d1950b91510ddf25efa6cda8852348b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.705345 containerd[1633]: time="2025-12-13T00:21:51.705303635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-f54tn,Uid:7ce6ad04-f89c-40a1-981e-2b7e39fe58e0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9993995c0dcbcb86c5b9d1f58b12b775d1950b91510ddf25efa6cda8852348b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 13 00:21:51.705650 kubelet[2818]: E1213 00:21:51.705551 2818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9993995c0dcbcb86c5b9d1f58b12b775d1950b91510ddf25efa6cda8852348b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:51.705650 kubelet[2818]: E1213 00:21:51.705629 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9993995c0dcbcb86c5b9d1f58b12b775d1950b91510ddf25efa6cda8852348b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-f54tn" Dec 13 00:21:51.705787 kubelet[2818]: E1213 00:21:51.705656 2818 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9993995c0dcbcb86c5b9d1f58b12b775d1950b91510ddf25efa6cda8852348b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-f54tn" Dec 13 00:21:51.705787 kubelet[2818]: E1213 00:21:51.705706 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-f54tn_calico-system(7ce6ad04-f89c-40a1-981e-2b7e39fe58e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-f54tn_calico-system(7ce6ad04-f89c-40a1-981e-2b7e39fe58e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9993995c0dcbcb86c5b9d1f58b12b775d1950b91510ddf25efa6cda8852348b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-f54tn" podUID="7ce6ad04-f89c-40a1-981e-2b7e39fe58e0" Dec 13 00:21:52.579118 systemd[1]: Created slice kubepods-besteffort-poddedbe661_92c2_4c3f_9ab9_3f4df404e3b1.slice - libcontainer container kubepods-besteffort-poddedbe661_92c2_4c3f_9ab9_3f4df404e3b1.slice. Dec 13 00:21:52.581720 containerd[1633]: time="2025-12-13T00:21:52.581677007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wvdrp,Uid:dedbe661-92c2-4c3f-9ab9-3f4df404e3b1,Namespace:calico-system,Attempt:0,}" Dec 13 00:21:52.607089 systemd[1]: run-netns-cni\x2d34121f89\x2d8467\x2d9b79\x2d08bc\x2df2c9a9a3e929.mount: Deactivated successfully. Dec 13 00:21:52.607223 systemd[1]: run-netns-cni\x2d847c9715\x2df0d6\x2dcee6\x2df762\x2d23df6f593a4c.mount: Deactivated successfully. Dec 13 00:21:52.635091 containerd[1633]: time="2025-12-13T00:21:52.635021670Z" level=error msg="Failed to destroy network for sandbox \"e99bcf2688f5010e480c42fc72a80f2db486e02c622306d81febc5baf4351de9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:52.637874 systemd[1]: run-netns-cni\x2d40c3806d\x2d9722\x2de669\x2dca57\x2d00ac55cd37d5.mount: Deactivated successfully. 
Dec 13 00:21:52.640075 containerd[1633]: time="2025-12-13T00:21:52.640024729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wvdrp,Uid:dedbe661-92c2-4c3f-9ab9-3f4df404e3b1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e99bcf2688f5010e480c42fc72a80f2db486e02c622306d81febc5baf4351de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:52.640481 kubelet[2818]: E1213 00:21:52.640341 2818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e99bcf2688f5010e480c42fc72a80f2db486e02c622306d81febc5baf4351de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 00:21:52.640481 kubelet[2818]: E1213 00:21:52.640410 2818 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e99bcf2688f5010e480c42fc72a80f2db486e02c622306d81febc5baf4351de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wvdrp" Dec 13 00:21:52.640481 kubelet[2818]: E1213 00:21:52.640436 2818 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e99bcf2688f5010e480c42fc72a80f2db486e02c622306d81febc5baf4351de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wvdrp" Dec 13 00:21:52.640616 kubelet[2818]: E1213 00:21:52.640490 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wvdrp_calico-system(dedbe661-92c2-4c3f-9ab9-3f4df404e3b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wvdrp_calico-system(dedbe661-92c2-4c3f-9ab9-3f4df404e3b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e99bcf2688f5010e480c42fc72a80f2db486e02c622306d81febc5baf4351de9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:21:59.576589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003326870.mount: Deactivated successfully. 
Dec 13 00:22:01.406248 containerd[1633]: time="2025-12-13T00:22:01.406159131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:01.415680 containerd[1633]: time="2025-12-13T00:22:01.415587067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 13 00:22:01.423185 containerd[1633]: time="2025-12-13T00:22:01.423130757Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:01.426272 containerd[1633]: time="2025-12-13T00:22:01.426237779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 00:22:01.427051 containerd[1633]: time="2025-12-13T00:22:01.426996273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.741665148s" Dec 13 00:22:01.427104 containerd[1633]: time="2025-12-13T00:22:01.427056676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 13 00:22:01.435745 containerd[1633]: time="2025-12-13T00:22:01.435698517Z" level=info msg="CreateContainer within sandbox \"27c3b67149de18d5045a92a9e592549042bca539ac8d82af916eb24f5a96bce5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 00:22:01.694362 containerd[1633]: time="2025-12-13T00:22:01.694232343Z" level=info msg="Container 2eab804dc15f4ccc240452880ab996d7ccea3bef38728c464b7e7a94368f4838: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:22:01.767136 containerd[1633]: time="2025-12-13T00:22:01.767086796Z" level=info msg="CreateContainer within sandbox \"27c3b67149de18d5045a92a9e592549042bca539ac8d82af916eb24f5a96bce5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2eab804dc15f4ccc240452880ab996d7ccea3bef38728c464b7e7a94368f4838\"" Dec 13 00:22:01.767637 containerd[1633]: time="2025-12-13T00:22:01.767616279Z" level=info msg="StartContainer for \"2eab804dc15f4ccc240452880ab996d7ccea3bef38728c464b7e7a94368f4838\"" Dec 13 00:22:01.769128 containerd[1633]: time="2025-12-13T00:22:01.769089194Z" level=info msg="connecting to shim 2eab804dc15f4ccc240452880ab996d7ccea3bef38728c464b7e7a94368f4838" address="unix:///run/containerd/s/d2bacede5c6d59f83bd4b4a2be62373a6fa675f111edf7d9a0dca69f6f52eb29" protocol=ttrpc version=3 Dec 13 00:22:01.795092 systemd[1]: Started cri-containerd-2eab804dc15f4ccc240452880ab996d7ccea3bef38728c464b7e7a94368f4838.scope - libcontainer container 2eab804dc15f4ccc240452880ab996d7ccea3bef38728c464b7e7a94368f4838. 
Dec 13 00:22:01.875000 audit: BPF prog-id=181 op=LOAD Dec 13 00:22:01.878074 kernel: kauditd_printk_skb: 6 callbacks suppressed Dec 13 00:22:01.878148 kernel: audit: type=1334 audit(1765585321.875:589): prog-id=181 op=LOAD Dec 13 00:22:01.875000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3429 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:01.885585 kernel: audit: type=1300 audit(1765585321.875:589): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3429 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:01.885640 kernel: audit: type=1327 audit(1765585321.875:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265616238303464633135663463636332343034353238383061623939 Dec 13 00:22:01.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265616238303464633135663463636332343034353238383061623939 Dec 13 00:22:01.875000 audit: BPF prog-id=182 op=LOAD Dec 13 00:22:01.892996 kernel: audit: type=1334 audit(1765585321.875:590): prog-id=182 op=LOAD Dec 13 00:22:01.893054 kernel: audit: type=1300 audit(1765585321.875:590): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3429 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:01.875000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3429 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:01.875000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265616238303464633135663463636332343034353238383061623939 Dec 13 00:22:01.905758 kernel: audit: type=1327 audit(1765585321.875:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265616238303464633135663463636332343034353238383061623939 Dec 13 00:22:01.876000 audit: BPF prog-id=182 op=UNLOAD Dec 13 00:22:01.876000 audit[3973]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:01.913482 kernel: audit: type=1334 audit(1765585321.876:591): prog-id=182 op=UNLOAD Dec 13 00:22:01.913537 kernel: audit: type=1300 
audit(1765585321.876:591): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:01.913569 kernel: audit: type=1327 audit(1765585321.876:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265616238303464633135663463636332343034353238383061623939 Dec 13 00:22:01.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265616238303464633135663463636332343034353238383061623939 Dec 13 00:22:01.876000 audit: BPF prog-id=181 op=UNLOAD Dec 13 00:22:01.921250 kernel: audit: type=1334 audit(1765585321.876:592): prog-id=181 op=UNLOAD Dec 13 00:22:01.876000 audit[3973]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3429 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:01.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265616238303464633135663463636332343034353238383061623939 Dec 13 00:22:01.876000 audit: BPF prog-id=183 op=LOAD Dec 13 00:22:01.876000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3429 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:01.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265616238303464633135663463636332343034353238383061623939 Dec 13 00:22:01.998861 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 00:22:01.998991 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 00:22:02.066019 containerd[1633]: time="2025-12-13T00:22:02.065945857Z" level=info msg="StartContainer for \"2eab804dc15f4ccc240452880ab996d7ccea3bef38728c464b7e7a94368f4838\" returns successfully" Dec 13 00:22:02.572732 kubelet[2818]: E1213 00:22:02.572698 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:02.573616 containerd[1633]: time="2025-12-13T00:22:02.573165437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wj7mw,Uid:63ef95d4-df6b-4b05-b59b-3f84085ee71d,Namespace:kube-system,Attempt:0,}" Dec 13 00:22:02.704844 kubelet[2818]: E1213 00:22:02.704742 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:02.794610 kubelet[2818]: I1213 00:22:02.794535 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cnwpg" podStartSLOduration=2.3329825140000002 podStartE2EDuration="24.794485697s" podCreationTimestamp="2025-12-13 00:21:38 +0000 UTC" firstStartedPulling="2025-12-13 00:21:38.966437663 +0000 UTC m=+20.497679967" lastFinishedPulling="2025-12-13 00:22:01.427940846 +0000 UTC m=+42.959183150" observedRunningTime="2025-12-13 00:22:02.792314702 +0000 UTC m=+44.323557006" watchObservedRunningTime="2025-12-13 00:22:02.794485697 +0000 UTC m=+44.325728001" Dec 13 00:22:02.847263 kubelet[2818]: I1213 00:22:02.847134 2818 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-whisker-ca-bundle\") pod \"9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7\" (UID: \"9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7\") " Dec 13 00:22:02.847263 kubelet[2818]: I1213 00:22:02.847195 2818 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqjvk\" (UniqueName: \"kubernetes.io/projected/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-kube-api-access-qqjvk\") pod \"9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7\" (UID: \"9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7\") " Dec 13 00:22:02.847263 kubelet[2818]: I1213 00:22:02.847231 2818 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-whisker-backend-key-pair\") pod \"9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7\" (UID: \"9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7\") " Dec 13 00:22:02.847874 kubelet[2818]: I1213 00:22:02.847651 2818 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7" (UID: "9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 13 00:22:02.852087 kubelet[2818]: I1213 00:22:02.852017 2818 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7" (UID: "9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 13 00:22:02.852087 kubelet[2818]: I1213 00:22:02.852075 2818 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-kube-api-access-qqjvk" (OuterVolumeSpecName: "kube-api-access-qqjvk") pod "9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7" (UID: "9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7"). InnerVolumeSpecName "kube-api-access-qqjvk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 13 00:22:02.853065 systemd[1]: var-lib-kubelet-pods-9ea90886\x2ddcbd\x2d4bc0\x2d9591\x2d0e36e6c0d4a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqqjvk.mount: Deactivated successfully. Dec 13 00:22:02.853185 systemd[1]: var-lib-kubelet-pods-9ea90886\x2ddcbd\x2d4bc0\x2d9591\x2d0e36e6c0d4a7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 13 00:22:02.948157 kubelet[2818]: I1213 00:22:02.948101 2818 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 13 00:22:02.948157 kubelet[2818]: I1213 00:22:02.948140 2818 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 13 00:22:02.948157 kubelet[2818]: I1213 00:22:02.948148 2818 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqjvk\" (UniqueName: \"kubernetes.io/projected/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7-kube-api-access-qqjvk\") on node \"localhost\" DevicePath \"\"" Dec 13 00:22:03.048791 systemd-networkd[1321]: calic7d30e05a04: Link UP Dec 13 00:22:03.049308 systemd-networkd[1321]: calic7d30e05a04: Gained carrier Dec 13 00:22:03.065279 containerd[1633]: 2025-12-13 00:22:02.727 [INFO][4027] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:22:03.065279 containerd[1633]: 2025-12-13 00:22:02.807 [INFO][4027] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0 coredns-668d6bf9bc- kube-system 63ef95d4-df6b-4b05-b59b-3f84085ee71d 882 0 2025-12-13 00:21:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wj7mw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic7d30e05a04 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Namespace="kube-system" Pod="coredns-668d6bf9bc-wj7mw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wj7mw-" Dec 13 00:22:03.065279 containerd[1633]: 2025-12-13 00:22:02.807 [INFO][4027] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Namespace="kube-system" Pod="coredns-668d6bf9bc-wj7mw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0" Dec 13 00:22:03.065279 containerd[1633]: 2025-12-13 00:22:02.980 [INFO][4069] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" 
HandleID="k8s-pod-network.48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Workload="localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0" Dec 13 00:22:03.066217 containerd[1633]: 2025-12-13 00:22:02.981 [INFO][4069] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" HandleID="k8s-pod-network.48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Workload="localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000479e70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wj7mw", "timestamp":"2025-12-13 00:22:02.980171422 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:22:03.066217 containerd[1633]: 2025-12-13 00:22:02.981 [INFO][4069] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:22:03.066217 containerd[1633]: 2025-12-13 00:22:02.981 [INFO][4069] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:22:03.066217 containerd[1633]: 2025-12-13 00:22:02.981 [INFO][4069] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:22:03.066217 containerd[1633]: 2025-12-13 00:22:03.000 [INFO][4069] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" host="localhost" Dec 13 00:22:03.066217 containerd[1633]: 2025-12-13 00:22:03.011 [INFO][4069] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:22:03.066217 containerd[1633]: 2025-12-13 00:22:03.016 [INFO][4069] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:22:03.066217 containerd[1633]: 2025-12-13 00:22:03.018 [INFO][4069] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:03.066217 containerd[1633]: 2025-12-13 00:22:03.020 [INFO][4069] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:03.066217 containerd[1633]: 2025-12-13 00:22:03.021 [INFO][4069] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" host="localhost" Dec 13 00:22:03.066977 containerd[1633]: 2025-12-13 00:22:03.022 [INFO][4069] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615 Dec 13 00:22:03.066977 containerd[1633]: 2025-12-13 00:22:03.029 [INFO][4069] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" host="localhost" Dec 13 00:22:03.066977 containerd[1633]: 2025-12-13 00:22:03.035 [INFO][4069] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" host="localhost" Dec 13 00:22:03.066977 containerd[1633]: 2025-12-13 00:22:03.035 [INFO][4069] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" host="localhost" Dec 13 
00:22:03.066977 containerd[1633]: 2025-12-13 00:22:03.035 [INFO][4069] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 13 00:22:03.066977 containerd[1633]: 2025-12-13 00:22:03.035 [INFO][4069] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" HandleID="k8s-pod-network.48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Workload="localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0" Dec 13 00:22:03.067149 containerd[1633]: 2025-12-13 00:22:03.041 [INFO][4027] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Namespace="kube-system" Pod="coredns-668d6bf9bc-wj7mw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"63ef95d4-df6b-4b05-b59b-3f84085ee71d", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wj7mw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7d30e05a04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:03.067458 containerd[1633]: 2025-12-13 00:22:03.041 [INFO][4027] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Namespace="kube-system" Pod="coredns-668d6bf9bc-wj7mw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0" Dec 13 00:22:03.067458 containerd[1633]: 2025-12-13 00:22:03.041 [INFO][4027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7d30e05a04 ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Namespace="kube-system" Pod="coredns-668d6bf9bc-wj7mw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0" Dec 13 00:22:03.067458 containerd[1633]: 2025-12-13 00:22:03.049 [INFO][4027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-wj7mw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0" Dec 13 00:22:03.067723 containerd[1633]: 2025-12-13 00:22:03.050 [INFO][4027] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Namespace="kube-system" Pod="coredns-668d6bf9bc-wj7mw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"63ef95d4-df6b-4b05-b59b-3f84085ee71d", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615", Pod:"coredns-668d6bf9bc-wj7mw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic7d30e05a04", MAC:"6a:d6:96:4d:36:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:03.067723 containerd[1633]: 2025-12-13 00:22:03.061 [INFO][4027] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" Namespace="kube-system" Pod="coredns-668d6bf9bc-wj7mw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wj7mw-eth0" Dec 13 00:22:03.573927 kubelet[2818]: E1213 00:22:03.572759 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:03.575362 containerd[1633]: time="2025-12-13T00:22:03.573188994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-f54tn,Uid:7ce6ad04-f89c-40a1-981e-2b7e39fe58e0,Namespace:calico-system,Attempt:0,}" Dec 13 00:22:03.575362 containerd[1633]: time="2025-12-13T00:22:03.573188944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2dg5q,Uid:7a41fb7c-fb92-4424-905a-7d7e492fd340,Namespace:kube-system,Attempt:0,}" Dec 13 00:22:03.708719 kubelet[2818]: E1213 00:22:03.708179 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:03.715621 systemd[1]: Removed slice kubepods-besteffort-pod9ea90886_dcbd_4bc0_9591_0e36e6c0d4a7.slice - libcontainer container kubepods-besteffort-pod9ea90886_dcbd_4bc0_9591_0e36e6c0d4a7.slice. Dec 13 00:22:03.853262 systemd[1]: Created slice kubepods-besteffort-pod5353c832_e4bf_4b05_bc32_552262f10d42.slice - libcontainer container kubepods-besteffort-pod5353c832_e4bf_4b05_bc32_552262f10d42.slice. Dec 13 00:22:03.854342 containerd[1633]: time="2025-12-13T00:22:03.853243207Z" level=info msg="connecting to shim 48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615" address="unix:///run/containerd/s/a108df024adc629b652cd9a73764788fec69e1ef3c03f2f48fa6ded233b61f5a" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:03.898049 systemd[1]: Started cri-containerd-48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615.scope - libcontainer container 48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615. Dec 13 00:22:03.915000 audit: BPF prog-id=184 op=LOAD Dec 13 00:22:03.915000 audit: BPF prog-id=185 op=LOAD Dec 13 00:22:03.915000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4153 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:03.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438643966623166306563303837656161376334626537316162613161 Dec 13 00:22:03.915000 audit: BPF prog-id=185 op=UNLOAD Dec 13 00:22:03.915000 audit[4167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4153 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:03.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438643966623166306563303837656161376334626537316162613161 Dec 13 00:22:03.915000 audit: BPF prog-id=186 op=LOAD Dec 13 00:22:03.915000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4153 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:03.915000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438643966623166306563303837656161376334626537316162613161 Dec 13 00:22:03.915000 audit: BPF prog-id=187 op=LOAD Dec 13 00:22:03.915000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4153 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:03.915000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438643966623166306563303837656161376334626537316162613161 Dec 13 00:22:03.916000 audit: BPF prog-id=187 op=UNLOAD Dec 13 00:22:03.916000 audit[4167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4153 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:03.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438643966623166306563303837656161376334626537316162613161 Dec 13 00:22:03.916000 audit: BPF prog-id=186 op=UNLOAD Dec 13 00:22:03.916000 audit[4167]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4153 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:03.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438643966623166306563303837656161376334626537316162613161 Dec 13 00:22:03.916000 audit: BPF prog-id=188 op=LOAD Dec 13 00:22:03.916000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4153 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:03.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3438643966623166306563303837656161376334626537316162613161 Dec 13 00:22:03.918107 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:22:03.957360 kubelet[2818]: I1213 00:22:03.957296 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5353c832-e4bf-4b05-bc32-552262f10d42-whisker-backend-key-pair\") pod \"whisker-6d686f8ffb-wbp4f\" (UID: \"5353c832-e4bf-4b05-bc32-552262f10d42\") " pod="calico-system/whisker-6d686f8ffb-wbp4f" Dec 13 00:22:03.957511 kubelet[2818]: I1213 00:22:03.957374 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5353c832-e4bf-4b05-bc32-552262f10d42-whisker-ca-bundle\") pod \"whisker-6d686f8ffb-wbp4f\" (UID: \"5353c832-e4bf-4b05-bc32-552262f10d42\") " pod="calico-system/whisker-6d686f8ffb-wbp4f" Dec 13 00:22:03.957511 kubelet[2818]: I1213 00:22:03.957410 2818 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkjbx\" (UniqueName: \"kubernetes.io/projected/5353c832-e4bf-4b05-bc32-552262f10d42-kube-api-access-zkjbx\") 
pod \"whisker-6d686f8ffb-wbp4f\" (UID: \"5353c832-e4bf-4b05-bc32-552262f10d42\") " pod="calico-system/whisker-6d686f8ffb-wbp4f" Dec 13 00:22:04.218459 systemd-networkd[1321]: cali97c3f55573c: Link UP Dec 13 00:22:04.220665 systemd-networkd[1321]: cali97c3f55573c: Gained carrier Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.845 [INFO][4120] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.871 [INFO][4120] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--f54tn-eth0 goldmane-666569f655- calico-system 7ce6ad04-f89c-40a1-981e-2b7e39fe58e0 878 0 2025-12-13 00:21:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-f54tn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali97c3f55573c [] [] }} ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Namespace="calico-system" Pod="goldmane-666569f655-f54tn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--f54tn-" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.871 [INFO][4120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Namespace="calico-system" Pod="goldmane-666569f655-f54tn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--f54tn-eth0" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.924 [INFO][4180] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" HandleID="k8s-pod-network.cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Workload="localhost-k8s-goldmane--666569f655--f54tn-eth0" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.925 [INFO][4180] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" HandleID="k8s-pod-network.cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Workload="localhost-k8s-goldmane--666569f655--f54tn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-f54tn", "timestamp":"2025-12-13 00:22:03.924724307 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.925 [INFO][4180] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.925 [INFO][4180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.925 [INFO][4180] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.932 [INFO][4180] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" host="localhost" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.938 [INFO][4180] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.941 [INFO][4180] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.943 [INFO][4180] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.946 [INFO][4180] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.946 [INFO][4180] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" host="localhost" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:03.948 [INFO][4180] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:04.190 [INFO][4180] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" host="localhost" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:04.211 [INFO][4180] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" host="localhost" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:04.211 [INFO][4180] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" host="localhost" Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:04.211 [INFO][4180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
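The [INFO][4180] trace above walks through Calico's IPAM for the goldmane pod exactly as it did for coredns a moment earlier: acquire the host-wide IPAM lock, confirm this host's affinity for the 192.168.88.128/26 block, pick a free address from it (192.168.88.130 this time), create a handle recording the claim, write the block back, and release the lock. The toy Go model below captures just that block-plus-handle bookkeeping; it is not Calico's code, and every type and function name in it is invented for illustration.

package main

import (
	"errors"
	"fmt"
	"net"
	"sync"
)

// block models one /26 allocation block with per-address handle ownership,
// in the spirit of the ipam.go trace above (affinity, claim, write back).
type block struct {
	mu    sync.Mutex     // stands in for the host-wide IPAM lock
	cidr  *net.IPNet
	inUse map[int]string // ordinal within the block -> handle that claimed it
	size  int
}

func newBlock(cidr string) (*block, error) {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ones, bits := n.Mask.Size()
	return &block{cidr: n, inUse: map[int]string{}, size: 1 << (bits - ones)}, nil
}

// autoAssign claims the lowest free ordinal for the given handle and returns
// the resulting address, mirroring "Attempting to assign 1 addresses from block".
func (b *block) autoAssign(handle string) (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for ord := 0; ord < b.size; ord++ {
		if _, taken := b.inUse[ord]; taken {
			continue
		}
		b.inUse[ord] = handle
		ip := make(net.IP, 4)
		copy(ip, b.cidr.IP.To4())
		ip[3] += byte(ord) // safe for a /26: ordinals 0..63 stay inside the last octet
		return ip, nil
	}
	return nil, errors.New("block exhausted")
}

func main() {
	b, err := newBlock("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	// This toy model hands out .128 and .129; the pods in the log received
	// .129 and .130, since real Calico skips some ordinals that this does not.
	for _, h := range []string{"k8s-pod-network.coredns-demo", "k8s-pod-network.goldmane-demo"} {
		ip, _ := b.autoAssign(h)
		fmt.Printf("%s -> %s\n", h, ip)
	}
}

One reason Calico hands out per-host blocks is visible in the trace: once the affinity for 192.168.88.128/26 is confirmed, every assignment for this node stays inside its own block, so consecutive claims like the ones above rarely contend with other hosts.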
Dec 13 00:22:04.282939 containerd[1633]: 2025-12-13 00:22:04.211 [INFO][4180] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" HandleID="k8s-pod-network.cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Workload="localhost-k8s-goldmane--666569f655--f54tn-eth0" Dec 13 00:22:04.283581 containerd[1633]: 2025-12-13 00:22:04.216 [INFO][4120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Namespace="calico-system" Pod="goldmane-666569f655-f54tn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--f54tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--f54tn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7ce6ad04-f89c-40a1-981e-2b7e39fe58e0", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-f54tn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97c3f55573c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:04.283581 containerd[1633]: 2025-12-13 00:22:04.216 [INFO][4120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Namespace="calico-system" Pod="goldmane-666569f655-f54tn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--f54tn-eth0" Dec 13 00:22:04.283581 containerd[1633]: 2025-12-13 00:22:04.216 [INFO][4120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97c3f55573c ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Namespace="calico-system" Pod="goldmane-666569f655-f54tn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--f54tn-eth0" Dec 13 00:22:04.283581 containerd[1633]: 2025-12-13 00:22:04.219 [INFO][4120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Namespace="calico-system" Pod="goldmane-666569f655-f54tn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--f54tn-eth0" Dec 13 00:22:04.283581 containerd[1633]: 2025-12-13 00:22:04.219 [INFO][4120] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Namespace="calico-system" Pod="goldmane-666569f655-f54tn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--f54tn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--f54tn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7ce6ad04-f89c-40a1-981e-2b7e39fe58e0", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b", Pod:"goldmane-666569f655-f54tn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali97c3f55573c", MAC:"ba:a7:4c:ec:df:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:04.283581 containerd[1633]: 2025-12-13 00:22:04.269 [INFO][4120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" Namespace="calico-system" Pod="goldmane-666569f655-f54tn" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--f54tn-eth0" Dec 13 00:22:04.323843 containerd[1633]: time="2025-12-13T00:22:04.323597105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wj7mw,Uid:63ef95d4-df6b-4b05-b59b-3f84085ee71d,Namespace:kube-system,Attempt:0,} returns sandbox id \"48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615\"" Dec 13 00:22:04.325117 kubelet[2818]: E1213 00:22:04.325082 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:04.327969 containerd[1633]: time="2025-12-13T00:22:04.327934957Z" level=info msg="CreateContainer within sandbox \"48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 00:22:04.374980 systemd-networkd[1321]: cali12af00b7061: Link UP Dec 13 00:22:04.376019 systemd-networkd[1321]: cali12af00b7061: Gained carrier Dec 13 00:22:04.398199 containerd[1633]: time="2025-12-13T00:22:04.398153270Z" level=info msg="connecting to shim cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b" address="unix:///run/containerd/s/a963841d0934c857803b8d83defdbe44aa93fd19f54d6dca2cb8af139a1bd56b" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:04.399305 containerd[1633]: time="2025-12-13T00:22:04.399249037Z" level=info msg="Container 06783d0d0d5f15472f39cc5d73ca68a76791e426bf44b4f1b4944c8b978000e9: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:03.862 [INFO][4128] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:03.897 [INFO][4128] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0 coredns-668d6bf9bc- kube-system 7a41fb7c-fb92-4424-905a-7d7e492fd340 869 0 2025-12-13 00:21:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-2dg5q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali12af00b7061 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Namespace="kube-system" Pod="coredns-668d6bf9bc-2dg5q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2dg5q-" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:03.899 [INFO][4128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Namespace="kube-system" Pod="coredns-668d6bf9bc-2dg5q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:03.936 [INFO][4194] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" HandleID="k8s-pod-network.af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Workload="localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:03.936 [INFO][4194] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" HandleID="k8s-pod-network.af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Workload="localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-2dg5q", "timestamp":"2025-12-13 00:22:03.936393408 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:03.936 [INFO][4194] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.211 [INFO][4194] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.211 [INFO][4194] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.221 [INFO][4194] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" host="localhost" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.268 [INFO][4194] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.280 [INFO][4194] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.287 [INFO][4194] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.291 [INFO][4194] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.291 [INFO][4194] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" host="localhost" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.329 [INFO][4194] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775 Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.349 [INFO][4194] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" host="localhost" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.358 [INFO][4194] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" host="localhost" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.358 [INFO][4194] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" host="localhost" Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.358 [INFO][4194] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:22:04.401938 containerd[1633]: 2025-12-13 00:22:04.358 [INFO][4194] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" HandleID="k8s-pod-network.af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Workload="localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0" Dec 13 00:22:04.402630 containerd[1633]: 2025-12-13 00:22:04.369 [INFO][4128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Namespace="kube-system" Pod="coredns-668d6bf9bc-2dg5q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7a41fb7c-fb92-4424-905a-7d7e492fd340", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-2dg5q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12af00b7061", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:04.402630 containerd[1633]: 2025-12-13 00:22:04.369 [INFO][4128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Namespace="kube-system" Pod="coredns-668d6bf9bc-2dg5q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0" Dec 13 00:22:04.402630 containerd[1633]: 2025-12-13 00:22:04.369 [INFO][4128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12af00b7061 ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Namespace="kube-system" Pod="coredns-668d6bf9bc-2dg5q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0" Dec 13 00:22:04.402630 containerd[1633]: 2025-12-13 00:22:04.376 [INFO][4128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Namespace="kube-system" Pod="coredns-668d6bf9bc-2dg5q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0" Dec 13 00:22:04.402630 
containerd[1633]: 2025-12-13 00:22:04.376 [INFO][4128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Namespace="kube-system" Pod="coredns-668d6bf9bc-2dg5q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7a41fb7c-fb92-4424-905a-7d7e492fd340", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775", Pod:"coredns-668d6bf9bc-2dg5q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali12af00b7061", MAC:"92:85:c5:a4:d7:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:04.402630 containerd[1633]: 2025-12-13 00:22:04.394 [INFO][4128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" Namespace="kube-system" Pod="coredns-668d6bf9bc-2dg5q" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--2dg5q-eth0" Dec 13 00:22:04.404036 systemd-networkd[1321]: calic7d30e05a04: Gained IPv6LL Dec 13 00:22:04.414267 kubelet[2818]: I1213 00:22:04.414237 2818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 00:22:04.415009 kubelet[2818]: E1213 00:22:04.414958 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:04.421036 containerd[1633]: time="2025-12-13T00:22:04.420982044Z" level=info msg="CreateContainer within sandbox \"48d9fb1f0ec087eaa7c4be71aba1ad1864adb43d114badf4261c58c8e3bda615\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06783d0d0d5f15472f39cc5d73ca68a76791e426bf44b4f1b4944c8b978000e9\"" Dec 13 00:22:04.422771 containerd[1633]: time="2025-12-13T00:22:04.421590085Z" level=info msg="StartContainer for \"06783d0d0d5f15472f39cc5d73ca68a76791e426bf44b4f1b4944c8b978000e9\"" Dec 13 00:22:04.427133 containerd[1633]: 
time="2025-12-13T00:22:04.424794089Z" level=info msg="connecting to shim 06783d0d0d5f15472f39cc5d73ca68a76791e426bf44b4f1b4944c8b978000e9" address="unix:///run/containerd/s/a108df024adc629b652cd9a73764788fec69e1ef3c03f2f48fa6ded233b61f5a" protocol=ttrpc version=3 Dec 13 00:22:04.450487 systemd[1]: Started cri-containerd-cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b.scope - libcontainer container cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b. Dec 13 00:22:04.455130 systemd[1]: Started cri-containerd-06783d0d0d5f15472f39cc5d73ca68a76791e426bf44b4f1b4944c8b978000e9.scope - libcontainer container 06783d0d0d5f15472f39cc5d73ca68a76791e426bf44b4f1b4944c8b978000e9. Dec 13 00:22:04.465027 containerd[1633]: time="2025-12-13T00:22:04.464937217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d686f8ffb-wbp4f,Uid:5353c832-e4bf-4b05-bc32-552262f10d42,Namespace:calico-system,Attempt:0,}" Dec 13 00:22:04.471794 containerd[1633]: time="2025-12-13T00:22:04.471615101Z" level=info msg="connecting to shim af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775" address="unix:///run/containerd/s/5113adcbd13a4428f552477287cf76d8cfb78fae4f8df33079bd4e67e2b2f881" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:04.489000 audit: BPF prog-id=189 op=LOAD Dec 13 00:22:04.491000 audit: BPF prog-id=190 op=LOAD Dec 13 00:22:04.491000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4153 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036373833643064306435663135343732663339636335643733636136 Dec 13 00:22:04.492000 audit: BPF prog-id=190 op=UNLOAD Dec 13 00:22:04.492000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4153 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.492000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036373833643064306435663135343732663339636335643733636136 Dec 13 00:22:04.493000 audit: BPF prog-id=191 op=LOAD Dec 13 00:22:04.493000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4153 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.493000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036373833643064306435663135343732663339636335643733636136 Dec 13 00:22:04.494000 audit: BPF prog-id=192 op=LOAD Dec 13 00:22:04.494000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4153 
pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036373833643064306435663135343732663339636335643733636136 Dec 13 00:22:04.494000 audit: BPF prog-id=192 op=UNLOAD Dec 13 00:22:04.494000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4153 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036373833643064306435663135343732663339636335643733636136 Dec 13 00:22:04.494000 audit: BPF prog-id=191 op=UNLOAD Dec 13 00:22:04.494000 audit[4355]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4153 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.494000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036373833643064306435663135343732663339636335643733636136 Dec 13 00:22:04.495000 audit: BPF prog-id=193 op=LOAD Dec 13 00:22:04.495000 audit[4355]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4153 pid=4355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036373833643064306435663135343732663339636335643733636136 Dec 13 00:22:04.514000 audit: BPF prog-id=194 op=LOAD Dec 13 00:22:04.518000 audit: BPF prog-id=195 op=LOAD Dec 13 00:22:04.518000 audit[4348]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4331 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364376331623931663563366164356265363964663565376638303634 Dec 13 00:22:04.518000 audit: BPF prog-id=195 op=UNLOAD Dec 13 00:22:04.518000 audit[4348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4331 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.518000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364376331623931663563366164356265363964663565376638303634 Dec 13 00:22:04.519000 audit: BPF prog-id=196 op=LOAD Dec 13 00:22:04.519000 audit[4348]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4331 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364376331623931663563366164356265363964663565376638303634 Dec 13 00:22:04.519000 audit: BPF prog-id=197 op=LOAD Dec 13 00:22:04.519000 audit[4348]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4331 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364376331623931663563366164356265363964663565376638303634 Dec 13 00:22:04.519000 audit: BPF prog-id=197 op=UNLOAD Dec 13 00:22:04.519000 audit[4348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4331 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364376331623931663563366164356265363964663565376638303634 Dec 13 00:22:04.519000 audit: BPF prog-id=196 op=UNLOAD Dec 13 00:22:04.519000 audit[4348]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4331 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364376331623931663563366164356265363964663565376638303634 Dec 13 00:22:04.519000 audit: BPF prog-id=198 op=LOAD Dec 13 00:22:04.519000 audit[4348]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4331 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.519000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364376331623931663563366164356265363964663565376638303634 Dec 13 00:22:04.527107 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:22:04.530000 audit[4455]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=4455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:04.530000 audit[4455]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffca2a051e0 a2=0 a3=7ffca2a051cc items=0 ppid=2927 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.530000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:04.535000 audit[4455]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=4455 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:04.535000 audit[4455]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffca2a051e0 a2=0 a3=7ffca2a051cc items=0 ppid=2927 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.535000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:04.546008 containerd[1633]: time="2025-12-13T00:22:04.545897091Z" level=info msg="StartContainer for \"06783d0d0d5f15472f39cc5d73ca68a76791e426bf44b4f1b4944c8b978000e9\" returns successfully" Dec 13 00:22:04.549942 systemd[1]: Started cri-containerd-af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775.scope - libcontainer container af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775. 
Dec 13 00:22:04.569000 audit: BPF prog-id=199 op=LOAD Dec 13 00:22:04.571000 audit: BPF prog-id=200 op=LOAD Dec 13 00:22:04.571000 audit[4423]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4394 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166376562636262343631653730376462383639343436373433653235 Dec 13 00:22:04.571000 audit: BPF prog-id=200 op=UNLOAD Dec 13 00:22:04.571000 audit[4423]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166376562636262343631653730376462383639343436373433653235 Dec 13 00:22:04.571000 audit: BPF prog-id=201 op=LOAD Dec 13 00:22:04.571000 audit[4423]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4394 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166376562636262343631653730376462383639343436373433653235 Dec 13 00:22:04.571000 audit: BPF prog-id=202 op=LOAD Dec 13 00:22:04.571000 audit[4423]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4394 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166376562636262343631653730376462383639343436373433653235 Dec 13 00:22:04.571000 audit: BPF prog-id=202 op=UNLOAD Dec 13 00:22:04.571000 audit[4423]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166376562636262343631653730376462383639343436373433653235 Dec 13 00:22:04.571000 audit: BPF prog-id=201 op=UNLOAD Dec 13 00:22:04.571000 audit[4423]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166376562636262343631653730376462383639343436373433653235 Dec 13 00:22:04.571000 audit: BPF prog-id=203 op=LOAD Dec 13 00:22:04.571000 audit[4423]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4394 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.571000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166376562636262343631653730376462383639343436373433653235 Dec 13 00:22:04.574095 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:22:04.582787 kubelet[2818]: I1213 00:22:04.582714 2818 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7" path="/var/lib/kubelet/pods/9ea90886-dcbd-4bc0-9591-0e36e6c0d4a7/volumes" Dec 13 00:22:04.626000 audit: BPF prog-id=204 op=LOAD Dec 13 00:22:04.626000 audit[4506]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1e4817b0 a2=98 a3=1fffffffffffffff items=0 ppid=4231 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.626000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:22:04.626000 audit: BPF prog-id=204 op=UNLOAD Dec 13 00:22:04.626000 audit[4506]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff1e481780 a3=0 items=0 ppid=4231 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.626000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:22:04.626000 audit: BPF prog-id=205 op=LOAD Dec 13 00:22:04.626000 audit[4506]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1e481690 a2=94 a3=3 items=0 ppid=4231 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.626000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:22:04.626000 audit: BPF prog-id=205 op=UNLOAD Dec 13 00:22:04.626000 audit[4506]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff1e481690 a2=94 a3=3 items=0 ppid=4231 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.626000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:22:04.626000 audit: BPF prog-id=206 op=LOAD Dec 13 00:22:04.626000 audit[4506]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1e4816d0 a2=94 a3=7fff1e4818b0 items=0 ppid=4231 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.626000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:22:04.626000 audit: BPF prog-id=206 op=UNLOAD Dec 13 00:22:04.626000 audit[4506]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff1e4816d0 a2=94 a3=7fff1e4818b0 items=0 ppid=4231 pid=4506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.626000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 13 00:22:04.627000 audit: BPF prog-id=207 op=LOAD Dec 13 00:22:04.627000 audit[4507]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffefa4cff10 a2=98 a3=3 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.627000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.628000 audit: BPF prog-id=207 op=UNLOAD Dec 13 00:22:04.628000 audit[4507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffefa4cfee0 a3=0 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.628000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.628000 audit: BPF prog-id=208 op=LOAD Dec 13 00:22:04.628000 audit[4507]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefa4cfd00 a2=94 a3=54428f items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.628000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.628000 audit: BPF prog-id=208 op=UNLOAD Dec 13 00:22:04.628000 audit[4507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffefa4cfd00 a2=94 a3=54428f items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.628000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.628000 audit: BPF prog-id=209 op=LOAD Dec 13 00:22:04.628000 audit[4507]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefa4cfd30 a2=94 a3=2 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.628000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.628000 audit: BPF prog-id=209 op=UNLOAD Dec 13 00:22:04.628000 audit[4507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffefa4cfd30 a2=0 a3=2 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.628000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.711771 kubelet[2818]: E1213 00:22:04.711734 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:04.711771 kubelet[2818]: E1213 00:22:04.711777 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:04.790029 containerd[1633]: time="2025-12-13T00:22:04.789911769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-f54tn,Uid:7ce6ad04-f89c-40a1-981e-2b7e39fe58e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd7c1b91f5c6ad5be69df5e7f80643128f529ccc4ac477dc8b585d29bd42c33b\"" Dec 13 00:22:04.793947 containerd[1633]: time="2025-12-13T00:22:04.793775040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:22:04.798015 containerd[1633]: time="2025-12-13T00:22:04.797976264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2dg5q,Uid:7a41fb7c-fb92-4424-905a-7d7e492fd340,Namespace:kube-system,Attempt:0,} returns sandbox id \"af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775\"" Dec 13 00:22:04.803190 kubelet[2818]: E1213 00:22:04.798727 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:04.804473 containerd[1633]: time="2025-12-13T00:22:04.804430898Z" level=info msg="CreateContainer within sandbox \"af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 00:22:04.831000 audit[4512]: NETFILTER_CFG table=filter:119 family=2 entries=20 
op=nft_register_rule pid=4512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:04.831000 audit[4512]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffed3c63450 a2=0 a3=7ffed3c6343c items=0 ppid=2927 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:04.835948 containerd[1633]: time="2025-12-13T00:22:04.835790372Z" level=info msg="Container e532c951b17137925c30f00d6a403e6d7260bc21fccbd893fc64bd1922e0e710: CDI devices from CRI Config.CDIDevices: []" Dec 13 00:22:04.836000 audit: BPF prog-id=210 op=LOAD Dec 13 00:22:04.836000 audit[4507]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefa4cfbf0 a2=94 a3=1 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.836000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.836000 audit: BPF prog-id=210 op=UNLOAD Dec 13 00:22:04.836000 audit[4507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffefa4cfbf0 a2=94 a3=1 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.836000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.840000 audit[4512]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4512 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:04.840000 audit[4512]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffed3c63450 a2=0 a3=0 items=0 ppid=2927 pid=4512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:04.847958 containerd[1633]: time="2025-12-13T00:22:04.847913835Z" level=info msg="CreateContainer within sandbox \"af7ebcbb461e707db869446743e250c271f2c507c9012d1f132f446c09cef775\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e532c951b17137925c30f00d6a403e6d7260bc21fccbd893fc64bd1922e0e710\"" Dec 13 00:22:04.849187 containerd[1633]: time="2025-12-13T00:22:04.849133334Z" level=info msg="StartContainer for \"e532c951b17137925c30f00d6a403e6d7260bc21fccbd893fc64bd1922e0e710\"" Dec 13 00:22:04.850000 audit: BPF prog-id=211 op=LOAD Dec 13 00:22:04.850000 audit[4507]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffefa4cfbe0 a2=94 a3=4 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.850000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.851000 audit: BPF prog-id=211 op=UNLOAD Dec 13 00:22:04.851000 audit[4507]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffefa4cfbe0 a2=0 a3=4 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.852000 audit: BPF prog-id=212 op=LOAD Dec 13 00:22:04.852000 audit[4507]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefa4cfa40 a2=94 a3=5 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.852000 audit: BPF prog-id=212 op=UNLOAD Dec 13 00:22:04.852000 audit[4507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffefa4cfa40 a2=0 a3=5 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.852000 audit: BPF prog-id=213 op=LOAD Dec 13 00:22:04.852000 audit[4507]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffefa4cfc60 a2=94 a3=6 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.852000 audit: BPF prog-id=213 op=UNLOAD Dec 13 00:22:04.852000 audit[4507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffefa4cfc60 a2=0 a3=6 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.852000 audit: BPF prog-id=214 op=LOAD Dec 13 00:22:04.852000 audit[4507]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffefa4cf410 a2=94 a3=88 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.853000 audit: BPF prog-id=215 op=LOAD Dec 13 00:22:04.853000 audit[4507]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffefa4cf290 a2=94 a3=2 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.853000 audit: BPF prog-id=215 op=UNLOAD Dec 13 00:22:04.853000 audit[4507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffefa4cf2c0 a2=0 a3=7ffefa4cf3c0 items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.854000 audit: BPF prog-id=214 op=UNLOAD Dec 13 00:22:04.854000 audit[4507]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=25298d10 a2=0 a3=251d0240033b687d items=0 ppid=4231 pid=4507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.854000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 00:22:04.857342 containerd[1633]: time="2025-12-13T00:22:04.853949724Z" level=info msg="connecting to shim e532c951b17137925c30f00d6a403e6d7260bc21fccbd893fc64bd1922e0e710" address="unix:///run/containerd/s/5113adcbd13a4428f552477287cf76d8cfb78fae4f8df33079bd4e67e2b2f881" protocol=ttrpc version=3 Dec 13 00:22:04.874162 systemd-networkd[1321]: cali2c106339dda: Link UP Dec 13 00:22:04.875518 systemd-networkd[1321]: cali2c106339dda: Gained carrier Dec 13 00:22:04.889844 kubelet[2818]: I1213 00:22:04.889002 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wj7mw" podStartSLOduration=39.888970439 podStartE2EDuration="39.888970439s" podCreationTimestamp="2025-12-13 00:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:22:04.815583188 +0000 UTC m=+46.346825502" watchObservedRunningTime="2025-12-13 00:22:04.888970439 +0000 UTC m=+46.420212743" Dec 13 00:22:04.889000 audit: BPF prog-id=216 op=LOAD Dec 13 00:22:04.889000 audit[4529]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0d2c9540 a2=98 a3=1999999999999999 items=0 ppid=4231 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.889000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:22:04.889000 audit: BPF prog-id=216 op=UNLOAD Dec 13 00:22:04.889000 audit[4529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd0d2c9510 a3=0 items=0 ppid=4231 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.889000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:22:04.889000 audit: BPF prog-id=217 op=LOAD Dec 13 00:22:04.889000 audit[4529]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0d2c9420 a2=94 a3=ffff items=0 ppid=4231 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.889000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:22:04.889000 audit: BPF prog-id=217 op=UNLOAD Dec 13 00:22:04.889000 audit[4529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd0d2c9420 a2=94 a3=ffff items=0 ppid=4231 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.889000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:22:04.889000 audit: BPF prog-id=218 op=LOAD Dec 13 00:22:04.889000 audit[4529]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd0d2c9460 a2=94 a3=7ffd0d2c9640 items=0 ppid=4231 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.889000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:22:04.889000 audit: BPF prog-id=218 op=UNLOAD Dec 13 00:22:04.889000 audit[4529]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd0d2c9460 a2=94 a3=7ffd0d2c9640 items=0 ppid=4231 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.889000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 00:22:04.897126 systemd[1]: Started cri-containerd-e532c951b17137925c30f00d6a403e6d7260bc21fccbd893fc64bd1922e0e710.scope - libcontainer container e532c951b17137925c30f00d6a403e6d7260bc21fccbd893fc64bd1922e0e710. 
Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.547 [INFO][4407] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.573 [INFO][4407] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0 whisker-6d686f8ffb- calico-system 5353c832-e4bf-4b05-bc32-552262f10d42 963 0 2025-12-13 00:22:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d686f8ffb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6d686f8ffb-wbp4f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2c106339dda [] [] }} ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Namespace="calico-system" Pod="whisker-6d686f8ffb-wbp4f" WorkloadEndpoint="localhost-k8s-whisker--6d686f8ffb--wbp4f-" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.575 [INFO][4407] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Namespace="calico-system" Pod="whisker-6d686f8ffb-wbp4f" WorkloadEndpoint="localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.724 [INFO][4475] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" HandleID="k8s-pod-network.bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Workload="localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.724 [INFO][4475] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" HandleID="k8s-pod-network.bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Workload="localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6d686f8ffb-wbp4f", "timestamp":"2025-12-13 00:22:04.724535273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.724 [INFO][4475] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.724 [INFO][4475] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.724 [INFO][4475] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.795 [INFO][4475] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" host="localhost" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.801 [INFO][4475] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.820 [INFO][4475] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.825 [INFO][4475] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.830 [INFO][4475] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.830 [INFO][4475] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" host="localhost" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.833 [INFO][4475] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05 Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.841 [INFO][4475] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" host="localhost" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.853 [INFO][4475] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" host="localhost" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.853 [INFO][4475] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" host="localhost" Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.854 [INFO][4475] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:22:04.900623 containerd[1633]: 2025-12-13 00:22:04.854 [INFO][4475] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" HandleID="k8s-pod-network.bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Workload="localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0" Dec 13 00:22:04.901941 containerd[1633]: 2025-12-13 00:22:04.870 [INFO][4407] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Namespace="calico-system" Pod="whisker-6d686f8ffb-wbp4f" WorkloadEndpoint="localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0", GenerateName:"whisker-6d686f8ffb-", Namespace:"calico-system", SelfLink:"", UID:"5353c832-e4bf-4b05-bc32-552262f10d42", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 22, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d686f8ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6d686f8ffb-wbp4f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2c106339dda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:04.901941 containerd[1633]: 2025-12-13 00:22:04.870 [INFO][4407] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Namespace="calico-system" Pod="whisker-6d686f8ffb-wbp4f" WorkloadEndpoint="localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0" Dec 13 00:22:04.901941 containerd[1633]: 2025-12-13 00:22:04.870 [INFO][4407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c106339dda ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Namespace="calico-system" Pod="whisker-6d686f8ffb-wbp4f" WorkloadEndpoint="localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0" Dec 13 00:22:04.901941 containerd[1633]: 2025-12-13 00:22:04.874 [INFO][4407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Namespace="calico-system" Pod="whisker-6d686f8ffb-wbp4f" WorkloadEndpoint="localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0" Dec 13 00:22:04.901941 containerd[1633]: 2025-12-13 00:22:04.875 [INFO][4407] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Namespace="calico-system" Pod="whisker-6d686f8ffb-wbp4f" WorkloadEndpoint="localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0", GenerateName:"whisker-6d686f8ffb-", Namespace:"calico-system", SelfLink:"", UID:"5353c832-e4bf-4b05-bc32-552262f10d42", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 22, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d686f8ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05", Pod:"whisker-6d686f8ffb-wbp4f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2c106339dda", MAC:"12:de:47:1d:a3:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:04.901941 containerd[1633]: 2025-12-13 00:22:04.890 [INFO][4407] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" Namespace="calico-system" Pod="whisker-6d686f8ffb-wbp4f" WorkloadEndpoint="localhost-k8s-whisker--6d686f8ffb--wbp4f-eth0" Dec 13 00:22:04.920000 audit: BPF prog-id=219 op=LOAD Dec 13 00:22:04.922000 audit: BPF prog-id=220 op=LOAD Dec 13 00:22:04.922000 audit[4515]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4394 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535333263393531623137313337393235633330663030643661343033 Dec 13 00:22:04.925000 audit: BPF prog-id=220 op=UNLOAD Dec 13 00:22:04.925000 audit[4515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535333263393531623137313337393235633330663030643661343033 Dec 13 00:22:04.925000 audit: BPF prog-id=221 op=LOAD Dec 13 00:22:04.925000 audit[4515]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4394 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535333263393531623137313337393235633330663030643661343033 Dec 13 00:22:04.925000 audit: BPF prog-id=222 op=LOAD Dec 13 00:22:04.925000 audit[4515]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4394 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535333263393531623137313337393235633330663030643661343033 Dec 13 00:22:04.925000 audit: BPF prog-id=222 op=UNLOAD Dec 13 00:22:04.925000 audit[4515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535333263393531623137313337393235633330663030643661343033 Dec 13 00:22:04.925000 audit: BPF prog-id=221 op=UNLOAD Dec 13 00:22:04.925000 audit[4515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535333263393531623137313337393235633330663030643661343033 Dec 13 00:22:04.925000 audit: BPF prog-id=223 op=LOAD Dec 13 00:22:04.925000 audit[4515]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4394 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:04.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535333263393531623137313337393235633330663030643661343033 Dec 13 00:22:04.938919 containerd[1633]: time="2025-12-13T00:22:04.938826046Z" level=info msg="connecting to shim bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05" address="unix:///run/containerd/s/ca9cda20a5806534d636fca5642583b6a939be63cbd1ec0f69d1cb2f6542d843" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:04.968961 containerd[1633]: time="2025-12-13T00:22:04.968918423Z" level=info msg="StartContainer for 
\"e532c951b17137925c30f00d6a403e6d7260bc21fccbd893fc64bd1922e0e710\" returns successfully" Dec 13 00:22:04.996368 systemd[1]: Started cri-containerd-bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05.scope - libcontainer container bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05. Dec 13 00:22:05.048000 audit: BPF prog-id=224 op=LOAD Dec 13 00:22:05.049000 audit: BPF prog-id=225 op=LOAD Dec 13 00:22:05.049000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4563 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663831663330643639323930393364343361363363633531353432 Dec 13 00:22:05.049000 audit: BPF prog-id=225 op=UNLOAD Dec 13 00:22:05.049000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663831663330643639323930393364343361363363633531353432 Dec 13 00:22:05.050000 audit: BPF prog-id=226 op=LOAD Dec 13 00:22:05.050000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4563 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663831663330643639323930393364343361363363633531353432 Dec 13 00:22:05.050000 audit: BPF prog-id=227 op=LOAD Dec 13 00:22:05.050000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4563 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663831663330643639323930393364343361363363633531353432 Dec 13 00:22:05.050000 audit: BPF prog-id=227 op=UNLOAD Dec 13 00:22:05.050000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.050000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663831663330643639323930393364343361363363633531353432 Dec 13 00:22:05.050000 audit: BPF prog-id=226 op=UNLOAD Dec 13 00:22:05.050000 audit[4581]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4563 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663831663330643639323930393364343361363363633531353432 Dec 13 00:22:05.050000 audit: BPF prog-id=228 op=LOAD Dec 13 00:22:05.050000 audit[4581]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4563 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266663831663330643639323930393364343361363363633531353432 Dec 13 00:22:05.053565 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:22:05.055552 systemd-networkd[1321]: vxlan.calico: Link UP Dec 13 00:22:05.055560 systemd-networkd[1321]: vxlan.calico: Gained carrier Dec 13 00:22:05.088000 audit: BPF prog-id=229 op=LOAD Dec 13 00:22:05.088000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc9e02580 a2=98 a3=0 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.088000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.088000 audit: BPF prog-id=229 op=UNLOAD Dec 13 00:22:05.088000 audit[4625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcc9e02550 a3=0 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.088000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.088000 audit: BPF prog-id=230 op=LOAD Dec 13 00:22:05.088000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc9e02390 a2=94 a3=54428f items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.088000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.088000 audit: BPF prog-id=230 op=UNLOAD Dec 13 00:22:05.088000 audit[4625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcc9e02390 a2=94 a3=54428f items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.088000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.088000 audit: BPF prog-id=231 op=LOAD Dec 13 00:22:05.088000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc9e023c0 a2=94 a3=2 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.088000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.088000 audit: BPF prog-id=231 op=UNLOAD Dec 13 00:22:05.088000 audit[4625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffcc9e023c0 a2=0 a3=2 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.088000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.089000 audit: BPF prog-id=232 op=LOAD Dec 13 00:22:05.089000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcc9e02170 a2=94 a3=4 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.089000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.089000 audit: BPF prog-id=232 op=UNLOAD Dec 13 00:22:05.089000 audit[4625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcc9e02170 a2=94 a3=4 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.089000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.089000 audit: 
BPF prog-id=233 op=LOAD Dec 13 00:22:05.089000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcc9e02270 a2=94 a3=7ffcc9e023f0 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.089000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.089000 audit: BPF prog-id=233 op=UNLOAD Dec 13 00:22:05.089000 audit[4625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcc9e02270 a2=0 a3=7ffcc9e023f0 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.089000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.091000 audit: BPF prog-id=234 op=LOAD Dec 13 00:22:05.091000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcc9e019a0 a2=94 a3=2 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.091000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.091000 audit: BPF prog-id=234 op=UNLOAD Dec 13 00:22:05.091000 audit[4625]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcc9e019a0 a2=0 a3=2 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.091000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.091000 audit: BPF prog-id=235 op=LOAD Dec 13 00:22:05.091000 audit[4625]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcc9e01aa0 a2=94 a3=30 items=0 ppid=4231 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.091000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 00:22:05.110063 containerd[1633]: time="2025-12-13T00:22:05.110006292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d686f8ffb-wbp4f,Uid:5353c832-e4bf-4b05-bc32-552262f10d42,Namespace:calico-system,Attempt:0,} returns sandbox id \"bff81f30d6929093d43a63cc5154287373353d2ffd4b783194f06d691667af05\"" Dec 13 
00:22:05.115000 audit: BPF prog-id=236 op=LOAD Dec 13 00:22:05.115000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd2988940 a2=98 a3=0 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.115000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.115000 audit: BPF prog-id=236 op=UNLOAD Dec 13 00:22:05.115000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffd2988910 a3=0 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.115000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.116000 audit: BPF prog-id=237 op=LOAD Dec 13 00:22:05.116000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd2988730 a2=94 a3=54428f items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.116000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.116000 audit: BPF prog-id=237 op=UNLOAD Dec 13 00:22:05.116000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffd2988730 a2=94 a3=54428f items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.116000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.116000 audit: BPF prog-id=238 op=LOAD Dec 13 00:22:05.116000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd2988760 a2=94 a3=2 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.116000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.116000 audit: BPF prog-id=238 op=UNLOAD Dec 13 00:22:05.116000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffd2988760 a2=0 a3=2 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.116000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.186545 containerd[1633]: time="2025-12-13T00:22:05.186319117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:05.188484 containerd[1633]: time="2025-12-13T00:22:05.188389662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:22:05.188798 containerd[1633]: time="2025-12-13T00:22:05.188478729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:05.189355 kubelet[2818]: E1213 00:22:05.189298 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:22:05.189469 kubelet[2818]: E1213 00:22:05.189456 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:22:05.190653 containerd[1633]: time="2025-12-13T00:22:05.190606272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:22:05.201654 kubelet[2818]: E1213 00:22:05.201532 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbbhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-f54tn_calico-system(7ce6ad04-f89c-40a1-981e-2b7e39fe58e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:05.202821 kubelet[2818]: E1213 00:22:05.202749 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f54tn" podUID="7ce6ad04-f89c-40a1-981e-2b7e39fe58e0" Dec 13 00:22:05.236029 systemd-networkd[1321]: cali97c3f55573c: Gained IPv6LL Dec 13 00:22:05.338000 audit: BPF prog-id=239 op=LOAD Dec 13 00:22:05.338000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffd2988620 a2=94 a3=1 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.338000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.339000 audit: BPF prog-id=239 op=UNLOAD Dec 13 00:22:05.339000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fffd2988620 a2=94 a3=1 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.339000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.351000 audit: BPF prog-id=240 op=LOAD Dec 13 00:22:05.351000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd2988610 a2=94 a3=4 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.351000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.351000 audit: BPF prog-id=240 op=UNLOAD Dec 13 00:22:05.351000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffd2988610 a2=0 a3=4 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.351000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.352000 audit: BPF prog-id=241 op=LOAD Dec 13 00:22:05.352000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffd2988470 a2=94 a3=5 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.352000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.352000 audit: BPF prog-id=241 op=UNLOAD Dec 13 00:22:05.352000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fffd2988470 a2=0 a3=5 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.352000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.352000 audit: BPF prog-id=242 op=LOAD Dec 13 00:22:05.352000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd2988690 a2=94 a3=6 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.352000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.352000 audit: BPF prog-id=242 op=UNLOAD Dec 13 00:22:05.352000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fffd2988690 a2=0 a3=6 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.352000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.352000 audit: BPF prog-id=243 op=LOAD Dec 13 00:22:05.352000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fffd2987e40 a2=94 a3=88 items=0 ppid=4231 pid=4640 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.352000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.353000 audit: BPF prog-id=244 op=LOAD Dec 13 00:22:05.353000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fffd2987cc0 a2=94 a3=2 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.353000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.353000 audit: BPF prog-id=244 op=UNLOAD Dec 13 00:22:05.353000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fffd2987cf0 a2=0 a3=7fffd2987df0 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.353000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.354000 audit: BPF prog-id=243 op=UNLOAD Dec 13 00:22:05.354000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2a18d10 a2=0 a3=1e53680af043b30 items=0 ppid=4231 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.354000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 00:22:05.362000 audit: BPF prog-id=235 op=UNLOAD Dec 13 00:22:05.362000 audit[4231]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000f20000 a2=0 a3=0 items=0 ppid=4216 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.362000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 13 00:22:05.445000 audit[4673]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4673 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:22:05.445000 audit[4673]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc710a06b0 a2=0 a3=7ffc710a069c items=0 ppid=4231 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.445000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:22:05.453000 audit[4674]: NETFILTER_CFG 
table=raw:122 family=2 entries=21 op=nft_register_chain pid=4674 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:22:05.453000 audit[4674]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffcfd455090 a2=0 a3=7ffcfd45507c items=0 ppid=4231 pid=4674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.453000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:22:05.456000 audit[4680]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4680 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:22:05.456000 audit[4680]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff6d4b3a40 a2=0 a3=7fff6d4b3a2c items=0 ppid=4231 pid=4680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.456000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:22:05.459000 audit[4677]: NETFILTER_CFG table=filter:124 family=2 entries=188 op=nft_register_chain pid=4677 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:22:05.459000 audit[4677]: SYSCALL arch=c000003e syscall=46 success=yes exit=110116 a0=3 a1=7ffe3b147e10 a2=0 a3=7ffe3b147dfc items=0 ppid=4231 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.459000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:22:05.535470 containerd[1633]: time="2025-12-13T00:22:05.535425096Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:05.536799 containerd[1633]: time="2025-12-13T00:22:05.536762696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:22:05.536857 containerd[1633]: time="2025-12-13T00:22:05.536790769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:05.537431 kubelet[2818]: E1213 00:22:05.537048 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:22:05.537431 kubelet[2818]: E1213 00:22:05.537164 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 
00:22:05.537431 kubelet[2818]: E1213 00:22:05.537272 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2da41ffab71742d79c306eb97befe8f6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkjbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d686f8ffb-wbp4f_calico-system(5353c832-e4bf-4b05-bc32-552262f10d42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:05.539064 containerd[1633]: time="2025-12-13T00:22:05.539038337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:22:05.572835 containerd[1633]: time="2025-12-13T00:22:05.572779478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wvdrp,Uid:dedbe661-92c2-4c3f-9ab9-3f4df404e3b1,Namespace:calico-system,Attempt:0,}" Dec 13 00:22:05.573194 containerd[1633]: time="2025-12-13T00:22:05.572802491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58486567b6-lnz72,Uid:00a965a0-569e-4742-bf83-196c624e0f8f,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:22:05.705200 systemd-networkd[1321]: calif6f2cc94135: Link UP Dec 13 00:22:05.707735 systemd-networkd[1321]: calif6f2cc94135: Gained carrier Dec 13 00:22:05.725827 kubelet[2818]: E1213 00:22:05.725746 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f54tn" podUID="7ce6ad04-f89c-40a1-981e-2b7e39fe58e0" Dec 13 00:22:05.727866 kubelet[2818]: E1213 00:22:05.727736 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:05.728308 kubelet[2818]: E1213 
00:22:05.728086 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.625 [INFO][4699] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0 calico-apiserver-58486567b6- calico-apiserver 00a965a0-569e-4742-bf83-196c624e0f8f 879 0 2025-12-13 00:21:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58486567b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-58486567b6-lnz72 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif6f2cc94135 [] [] }} ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-lnz72" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--lnz72-" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.626 [INFO][4699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-lnz72" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.664 [INFO][4721] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" HandleID="k8s-pod-network.1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Workload="localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.664 [INFO][4721] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" HandleID="k8s-pod-network.1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Workload="localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-58486567b6-lnz72", "timestamp":"2025-12-13 00:22:05.664730481 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.665 [INFO][4721] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.665 [INFO][4721] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
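The audit records earlier in this section carry the invoked command line hex-encoded in their proctitle field, as a NUL-separated argv. As a minimal sketch, the snippet below decodes one such value, copied verbatim from a bpftool record above; the decoding itself is generic and applies equally to the runc and iptables-nft-restore proctitle values in this log.

# Minimal sketch: decode an audit PROCTITLE value into a readable command line.
# The audit subsystem hex-encodes the process argv with NUL separators; the
# string below is copied verbatim from one of the bpftool records above.
hex_proctitle = "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41"

argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode("utf-8", errors="replace") for arg in argv))
# Expected output:
# bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A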
Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.665 [INFO][4721] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.672 [INFO][4721] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" host="localhost" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.676 [INFO][4721] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.680 [INFO][4721] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.682 [INFO][4721] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.684 [INFO][4721] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.684 [INFO][4721] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" host="localhost" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.685 [INFO][4721] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19 Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.690 [INFO][4721] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" host="localhost" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.697 [INFO][4721] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" host="localhost" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.697 [INFO][4721] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" host="localhost" Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.697 [INFO][4721] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:22:05.730733 containerd[1633]: 2025-12-13 00:22:05.697 [INFO][4721] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" HandleID="k8s-pod-network.1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Workload="localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0" Dec 13 00:22:05.731522 containerd[1633]: 2025-12-13 00:22:05.700 [INFO][4699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-lnz72" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0", GenerateName:"calico-apiserver-58486567b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"00a965a0-569e-4742-bf83-196c624e0f8f", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58486567b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-58486567b6-lnz72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6f2cc94135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:05.731522 containerd[1633]: 2025-12-13 00:22:05.701 [INFO][4699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-lnz72" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0" Dec 13 00:22:05.731522 containerd[1633]: 2025-12-13 00:22:05.701 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6f2cc94135 ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-lnz72" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0" Dec 13 00:22:05.731522 containerd[1633]: 2025-12-13 00:22:05.706 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-lnz72" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0" Dec 13 00:22:05.731522 containerd[1633]: 2025-12-13 00:22:05.706 [INFO][4699] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-lnz72" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0", GenerateName:"calico-apiserver-58486567b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"00a965a0-569e-4742-bf83-196c624e0f8f", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58486567b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19", Pod:"calico-apiserver-58486567b6-lnz72", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif6f2cc94135", MAC:"52:00:88:c1:39:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:05.731522 containerd[1633]: 2025-12-13 00:22:05.719 [INFO][4699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-lnz72" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--lnz72-eth0" Dec 13 00:22:05.736000 audit[4743]: NETFILTER_CFG table=filter:125 family=2 entries=62 op=nft_register_chain pid=4743 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:22:05.736000 audit[4743]: SYSCALL arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7fffa4832d00 a2=0 a3=7fffa4832cec items=0 ppid=4231 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.736000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:22:05.754293 kubelet[2818]: I1213 00:22:05.754205 2818 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2dg5q" podStartSLOduration=40.754186791 podStartE2EDuration="40.754186791s" podCreationTimestamp="2025-12-13 00:21:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-13 00:22:05.753028057 +0000 UTC m=+47.284270361" watchObservedRunningTime="2025-12-13 00:22:05.754186791 +0000 UTC m=+47.285429095" Dec 13 00:22:05.759830 containerd[1633]: time="2025-12-13T00:22:05.759656576Z" level=info 
msg="connecting to shim 1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19" address="unix:///run/containerd/s/10bf46828dae17da278774cb1880e7d5bffec9a24b5584a6d9d9337920411ae1" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:05.775000 audit[4763]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4763 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:05.775000 audit[4763]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd688825a0 a2=0 a3=7ffd6888258c items=0 ppid=2927 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:05.789000 audit[4763]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4763 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:05.789000 audit[4763]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd688825a0 a2=0 a3=0 items=0 ppid=2927 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:05.816456 systemd[1]: Started cri-containerd-1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19.scope - libcontainer container 1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19. 
Dec 13 00:22:05.832116 systemd-networkd[1321]: calid6a812d4911: Link UP Dec 13 00:22:05.833732 systemd-networkd[1321]: calid6a812d4911: Gained carrier Dec 13 00:22:05.843000 audit: BPF prog-id=245 op=LOAD Dec 13 00:22:05.845000 audit: BPF prog-id=246 op=LOAD Dec 13 00:22:05.845000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132313163323232323262346666353530323464343565623832653165 Dec 13 00:22:05.845000 audit: BPF prog-id=246 op=UNLOAD Dec 13 00:22:05.845000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132313163323232323262346666353530323464343565623832653165 Dec 13 00:22:05.845000 audit: BPF prog-id=247 op=LOAD Dec 13 00:22:05.845000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132313163323232323262346666353530323464343565623832653165 Dec 13 00:22:05.845000 audit: BPF prog-id=248 op=LOAD Dec 13 00:22:05.845000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132313163323232323262346666353530323464343565623832653165 Dec 13 00:22:05.845000 audit: BPF prog-id=248 op=UNLOAD Dec 13 00:22:05.845000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.845000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132313163323232323262346666353530323464343565623832653165 Dec 13 00:22:05.845000 audit: BPF prog-id=247 op=UNLOAD Dec 13 00:22:05.845000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132313163323232323262346666353530323464343565623832653165 Dec 13 00:22:05.845000 audit: BPF prog-id=249 op=LOAD Dec 13 00:22:05.845000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.845000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132313163323232323262346666353530323464343565623832653165 Dec 13 00:22:05.848557 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.637 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wvdrp-eth0 csi-node-driver- calico-system dedbe661-92c2-4c3f-9ab9-3f4df404e3b1 790 0 2025-12-13 00:21:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wvdrp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid6a812d4911 [] [] }} ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Namespace="calico-system" Pod="csi-node-driver-wvdrp" WorkloadEndpoint="localhost-k8s-csi--node--driver--wvdrp-" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.637 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Namespace="calico-system" Pod="csi-node-driver-wvdrp" WorkloadEndpoint="localhost-k8s-csi--node--driver--wvdrp-eth0" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.672 [INFO][4727] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" HandleID="k8s-pod-network.25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Workload="localhost-k8s-csi--node--driver--wvdrp-eth0" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.672 [INFO][4727] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" HandleID="k8s-pod-network.25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Workload="localhost-k8s-csi--node--driver--wvdrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012d490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wvdrp", "timestamp":"2025-12-13 00:22:05.672053383 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.672 [INFO][4727] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.697 [INFO][4727] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.698 [INFO][4727] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.774 [INFO][4727] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" host="localhost" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.786 [INFO][4727] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.796 [INFO][4727] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.800 [INFO][4727] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.803 [INFO][4727] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.803 [INFO][4727] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" host="localhost" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.806 [INFO][4727] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.816 [INFO][4727] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" host="localhost" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.823 [INFO][4727] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" host="localhost" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.823 [INFO][4727] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" host="localhost" Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.823 [INFO][4727] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:22:05.852972 containerd[1633]: 2025-12-13 00:22:05.823 [INFO][4727] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" HandleID="k8s-pod-network.25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Workload="localhost-k8s-csi--node--driver--wvdrp-eth0" Dec 13 00:22:05.854032 containerd[1633]: 2025-12-13 00:22:05.828 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Namespace="calico-system" Pod="csi-node-driver-wvdrp" WorkloadEndpoint="localhost-k8s-csi--node--driver--wvdrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wvdrp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dedbe661-92c2-4c3f-9ab9-3f4df404e3b1", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wvdrp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid6a812d4911", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:05.854032 containerd[1633]: 2025-12-13 00:22:05.828 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Namespace="calico-system" Pod="csi-node-driver-wvdrp" WorkloadEndpoint="localhost-k8s-csi--node--driver--wvdrp-eth0" Dec 13 00:22:05.854032 containerd[1633]: 2025-12-13 00:22:05.828 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6a812d4911 ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Namespace="calico-system" Pod="csi-node-driver-wvdrp" WorkloadEndpoint="localhost-k8s-csi--node--driver--wvdrp-eth0" Dec 13 00:22:05.854032 containerd[1633]: 2025-12-13 00:22:05.833 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Namespace="calico-system" Pod="csi-node-driver-wvdrp" WorkloadEndpoint="localhost-k8s-csi--node--driver--wvdrp-eth0" Dec 13 00:22:05.854032 containerd[1633]: 2025-12-13 00:22:05.835 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Namespace="calico-system" Pod="csi-node-driver-wvdrp" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wvdrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wvdrp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dedbe661-92c2-4c3f-9ab9-3f4df404e3b1", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa", Pod:"csi-node-driver-wvdrp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid6a812d4911", MAC:"5e:6f:b1:9a:c1:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:05.854032 containerd[1633]: 2025-12-13 00:22:05.847 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" Namespace="calico-system" Pod="csi-node-driver-wvdrp" WorkloadEndpoint="localhost-k8s-csi--node--driver--wvdrp-eth0" Dec 13 00:22:05.877000 audit[4797]: NETFILTER_CFG table=filter:128 family=2 entries=52 op=nft_register_chain pid=4797 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:22:05.877000 audit[4797]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffc8881b620 a2=0 a3=7ffc8881b60c items=0 ppid=4231 pid=4797 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.877000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:22:05.887834 containerd[1633]: time="2025-12-13T00:22:05.887747842Z" level=info msg="connecting to shim 25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa" address="unix:///run/containerd/s/cabc6c9bcc3fb1cd611fa29fbf438318b233eb84af55ad6067abc489198661bd" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:05.905563 containerd[1633]: time="2025-12-13T00:22:05.905478497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:05.910779 containerd[1633]: time="2025-12-13T00:22:05.910689176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58486567b6-lnz72,Uid:00a965a0-569e-4742-bf83-196c624e0f8f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1211c22222b4ff55024d45eb82e1e3eeb98ef58fd5fda8b9b27723a30e7ebe19\"" Dec 13 00:22:05.913643 containerd[1633]: 
time="2025-12-13T00:22:05.913408089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:22:05.913735 containerd[1633]: time="2025-12-13T00:22:05.913484372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:05.914230 kubelet[2818]: E1213 00:22:05.913897 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:22:05.914230 kubelet[2818]: E1213 00:22:05.914207 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:22:05.914405 kubelet[2818]: E1213 00:22:05.914363 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkjbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d686f8ffb-wbp4f_calico-system(5353c832-e4bf-4b05-bc32-552262f10d42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 
00:22:05.914587 containerd[1633]: time="2025-12-13T00:22:05.914561462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:22:05.915572 kubelet[2818]: E1213 00:22:05.915523 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d686f8ffb-wbp4f" podUID="5353c832-e4bf-4b05-bc32-552262f10d42" Dec 13 00:22:05.918100 systemd[1]: Started cri-containerd-25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa.scope - libcontainer container 25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa. Dec 13 00:22:05.934000 audit: BPF prog-id=250 op=LOAD Dec 13 00:22:05.934000 audit: BPF prog-id=251 op=LOAD Dec 13 00:22:05.934000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4806 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613064363130343531356630646237356533313434383864666661 Dec 13 00:22:05.934000 audit: BPF prog-id=251 op=UNLOAD Dec 13 00:22:05.934000 audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4806 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613064363130343531356630646237356533313434383864666661 Dec 13 00:22:05.934000 audit: BPF prog-id=252 op=LOAD Dec 13 00:22:05.934000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4806 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613064363130343531356630646237356533313434383864666661 Dec 13 00:22:05.934000 audit: BPF prog-id=253 op=LOAD Dec 13 00:22:05.934000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4806 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613064363130343531356630646237356533313434383864666661 Dec 13 00:22:05.934000 audit: BPF prog-id=253 op=UNLOAD Dec 13 00:22:05.934000 audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4806 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613064363130343531356630646237356533313434383864666661 Dec 13 00:22:05.934000 audit: BPF prog-id=252 op=UNLOAD Dec 13 00:22:05.934000 audit[4825]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4806 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613064363130343531356630646237356533313434383864666661 Dec 13 00:22:05.935000 audit: BPF prog-id=254 op=LOAD Dec 13 00:22:05.935000 audit[4825]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4806 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:05.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613064363130343531356630646237356533313434383864666661 Dec 13 00:22:05.936797 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:22:05.940012 systemd-networkd[1321]: cali12af00b7061: Gained IPv6LL Dec 13 00:22:05.955614 containerd[1633]: time="2025-12-13T00:22:05.955568019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wvdrp,Uid:dedbe661-92c2-4c3f-9ab9-3f4df404e3b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"25a0d6104515f0db75e314488dffabe2a47fbf6fe22fbc5cccf4d3786109c1aa\"" Dec 13 00:22:06.224088 containerd[1633]: time="2025-12-13T00:22:06.224024807Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:06.313638 containerd[1633]: time="2025-12-13T00:22:06.313570669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:22:06.313830 containerd[1633]: time="2025-12-13T00:22:06.313657372Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:06.314001 kubelet[2818]: E1213 00:22:06.313927 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:06.314001 kubelet[2818]: E1213 00:22:06.313985 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:06.314563 containerd[1633]: time="2025-12-13T00:22:06.314355372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:22:06.314620 kubelet[2818]: E1213 00:22:06.314372 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhk5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58486567b6-lnz72_calico-apiserver(00a965a0-569e-4742-bf83-196c624e0f8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:06.315711 kubelet[2818]: E1213 00:22:06.315654 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" podUID="00a965a0-569e-4742-bf83-196c624e0f8f" Dec 13 00:22:06.572728 containerd[1633]: time="2025-12-13T00:22:06.572676776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf9f886c6-9fch9,Uid:fce4aad9-52fa-4b91-82ff-c6436952148b,Namespace:calico-system,Attempt:0,}" Dec 13 00:22:06.572884 containerd[1633]: time="2025-12-13T00:22:06.572678620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58486567b6-tgd79,Uid:b431ca61-6062-45e4-a35d-3ec7ff6dccb1,Namespace:calico-apiserver,Attempt:0,}" Dec 13 00:22:06.581084 systemd-networkd[1321]: cali2c106339dda: Gained IPv6LL Dec 13 00:22:06.692563 systemd-networkd[1321]: cali6ab308fea44: Link UP Dec 13 00:22:06.693199 systemd-networkd[1321]: cali6ab308fea44: Gained carrier Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.611 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0 calico-kube-controllers-7cf9f886c6- calico-system fce4aad9-52fa-4b91-82ff-c6436952148b 873 0 2025-12-13 00:21:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cf9f886c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7cf9f886c6-9fch9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6ab308fea44 [] [] }} ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Namespace="calico-system" Pod="calico-kube-controllers-7cf9f886c6-9fch9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.612 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Namespace="calico-system" Pod="calico-kube-controllers-7cf9f886c6-9fch9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.648 [INFO][4884] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" HandleID="k8s-pod-network.78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Workload="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.648 [INFO][4884] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" HandleID="k8s-pod-network.78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Workload="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f550), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7cf9f886c6-9fch9", "timestamp":"2025-12-13 00:22:06.648334716 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.648 [INFO][4884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.648 [INFO][4884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.648 [INFO][4884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.657 [INFO][4884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" host="localhost" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.663 [INFO][4884] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.668 [INFO][4884] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.669 [INFO][4884] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.671 [INFO][4884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.672 [INFO][4884] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" host="localhost" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.673 [INFO][4884] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664 Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.679 [INFO][4884] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" host="localhost" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.685 [INFO][4884] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" host="localhost" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.685 [INFO][4884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" host="localhost" Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.686 [INFO][4884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 13 00:22:06.704652 containerd[1633]: 2025-12-13 00:22:06.686 [INFO][4884] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" HandleID="k8s-pod-network.78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Workload="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0" Dec 13 00:22:06.705334 containerd[1633]: 2025-12-13 00:22:06.689 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Namespace="calico-system" Pod="calico-kube-controllers-7cf9f886c6-9fch9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0", GenerateName:"calico-kube-controllers-7cf9f886c6-", Namespace:"calico-system", SelfLink:"", UID:"fce4aad9-52fa-4b91-82ff-c6436952148b", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf9f886c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7cf9f886c6-9fch9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ab308fea44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:06.705334 containerd[1633]: 2025-12-13 00:22:06.689 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Namespace="calico-system" Pod="calico-kube-controllers-7cf9f886c6-9fch9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0" Dec 13 00:22:06.705334 containerd[1633]: 2025-12-13 00:22:06.689 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ab308fea44 ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Namespace="calico-system" Pod="calico-kube-controllers-7cf9f886c6-9fch9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0" Dec 13 00:22:06.705334 containerd[1633]: 2025-12-13 00:22:06.693 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Namespace="calico-system" Pod="calico-kube-controllers-7cf9f886c6-9fch9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0" Dec 13 00:22:06.705334 containerd[1633]: 2025-12-13 00:22:06.693 [INFO][4854] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Namespace="calico-system" Pod="calico-kube-controllers-7cf9f886c6-9fch9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0", GenerateName:"calico-kube-controllers-7cf9f886c6-", Namespace:"calico-system", SelfLink:"", UID:"fce4aad9-52fa-4b91-82ff-c6436952148b", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cf9f886c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664", Pod:"calico-kube-controllers-7cf9f886c6-9fch9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6ab308fea44", MAC:"1a:2d:a2:ec:23:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:06.705334 containerd[1633]: 2025-12-13 00:22:06.702 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" Namespace="calico-system" Pod="calico-kube-controllers-7cf9f886c6-9fch9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7cf9f886c6--9fch9-eth0" Dec 13 00:22:06.721000 audit[4909]: NETFILTER_CFG table=filter:129 family=2 entries=56 op=nft_register_chain pid=4909 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:22:06.721000 audit[4909]: SYSCALL arch=c000003e syscall=46 success=yes exit=25516 a0=3 a1=7ffec14558e0 a2=0 a3=7ffec14558cc items=0 ppid=4231 pid=4909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.721000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:22:06.731860 kubelet[2818]: E1213 00:22:06.731602 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:06.731860 kubelet[2818]: E1213 00:22:06.731760 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:06.733036 kubelet[2818]: E1213 00:22:06.732994 2818 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f54tn" podUID="7ce6ad04-f89c-40a1-981e-2b7e39fe58e0" Dec 13 00:22:06.734903 kubelet[2818]: E1213 00:22:06.734793 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" podUID="00a965a0-569e-4742-bf83-196c624e0f8f" Dec 13 00:22:06.735106 kubelet[2818]: E1213 00:22:06.735071 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d686f8ffb-wbp4f" podUID="5353c832-e4bf-4b05-bc32-552262f10d42" Dec 13 00:22:06.738048 containerd[1633]: time="2025-12-13T00:22:06.737986976Z" level=info msg="connecting to shim 78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664" address="unix:///run/containerd/s/425677d36e2330545e10bfb0e70848dac9dcef4ab229011a0698a179be79ac58" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:06.747055 containerd[1633]: time="2025-12-13T00:22:06.747010610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:06.748920 containerd[1633]: time="2025-12-13T00:22:06.748765483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:22:06.748920 containerd[1633]: time="2025-12-13T00:22:06.748854840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:06.749256 kubelet[2818]: E1213 00:22:06.749219 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:22:06.749308 kubelet[2818]: E1213 00:22:06.749271 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:22:06.749444 kubelet[2818]: E1213 00:22:06.749392 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq9mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wvdrp_calico-system(dedbe661-92c2-4c3f-9ab9-3f4df404e3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:06.751290 containerd[1633]: time="2025-12-13T00:22:06.751261387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:22:06.783124 systemd[1]: Started cri-containerd-78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664.scope - libcontainer container 78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664. 
Dec 13 00:22:06.801000 audit[4949]: NETFILTER_CFG table=filter:130 family=2 entries=17 op=nft_register_rule pid=4949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:06.801000 audit[4949]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdbf98df50 a2=0 a3=7ffdbf98df3c items=0 ppid=2927 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:06.815852 systemd-networkd[1321]: calie2413283f92: Link UP Dec 13 00:22:06.814000 audit[4949]: NETFILTER_CFG table=nat:131 family=2 entries=47 op=nft_register_chain pid=4949 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:06.814000 audit[4949]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdbf98df50 a2=0 a3=7ffdbf98df3c items=0 ppid=2927 pid=4949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:06.817372 systemd-networkd[1321]: calie2413283f92: Gained carrier Dec 13 00:22:06.818000 audit: BPF prog-id=255 op=LOAD Dec 13 00:22:06.819000 audit: BPF prog-id=256 op=LOAD Dec 13 00:22:06.819000 audit[4930]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4918 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738666266393335376236326537646363373933343430353762396662 Dec 13 00:22:06.819000 audit: BPF prog-id=256 op=UNLOAD Dec 13 00:22:06.819000 audit[4930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4918 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738666266393335376236326537646363373933343430353762396662 Dec 13 00:22:06.819000 audit: BPF prog-id=257 op=LOAD Dec 13 00:22:06.819000 audit[4930]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4918 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.819000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738666266393335376236326537646363373933343430353762396662 Dec 13 00:22:06.819000 audit: BPF prog-id=258 op=LOAD Dec 13 00:22:06.819000 audit[4930]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4918 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738666266393335376236326537646363373933343430353762396662 Dec 13 00:22:06.819000 audit: BPF prog-id=258 op=UNLOAD Dec 13 00:22:06.819000 audit[4930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4918 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738666266393335376236326537646363373933343430353762396662 Dec 13 00:22:06.819000 audit: BPF prog-id=257 op=UNLOAD Dec 13 00:22:06.819000 audit[4930]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4918 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738666266393335376236326537646363373933343430353762396662 Dec 13 00:22:06.820000 audit: BPF prog-id=259 op=LOAD Dec 13 00:22:06.820000 audit[4930]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4918 pid=4930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.820000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738666266393335376236326537646363373933343430353762396662 Dec 13 00:22:06.825523 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.622 [INFO][4862] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0 calico-apiserver-58486567b6- calico-apiserver b431ca61-6062-45e4-a35d-3ec7ff6dccb1 884 0 2025-12-13 00:21:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:58486567b6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-58486567b6-tgd79 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie2413283f92 [] [] }} ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-tgd79" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--tgd79-" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.622 [INFO][4862] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-tgd79" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.651 [INFO][4890] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" HandleID="k8s-pod-network.7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Workload="localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.651 [INFO][4890] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" HandleID="k8s-pod-network.7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Workload="localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-58486567b6-tgd79", "timestamp":"2025-12-13 00:22:06.651455993 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.651 [INFO][4890] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.686 [INFO][4890] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.686 [INFO][4890] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.757 [INFO][4890] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" host="localhost" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.771 [INFO][4890] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.785 [INFO][4890] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.788 [INFO][4890] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.790 [INFO][4890] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.790 [INFO][4890] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" host="localhost" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.791 [INFO][4890] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9 Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.796 [INFO][4890] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" host="localhost" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.803 [INFO][4890] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" host="localhost" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.803 [INFO][4890] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" host="localhost" Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.803 [INFO][4890] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
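
Editor's note: once the CNI plugin adds the MAC, interface name, and container ID to an endpoint, the WorkloadEndpoint records in this log carry a generated MAC with the locally administered bit set and the multicast bit clear (e.g. 5e:6f:b1:9a:c1:a5 above and f6:04:ec:2d:75:18 below). The sketch that follows is illustrative only, not Calico's code: it generates a random MAC of that same flavour.

    // macsketch.go - illustrative only: generates a locally administered,
    // unicast MAC like the ones recorded on the workload endpoints in this log.
    package main

    import (
        "crypto/rand"
        "fmt"
        "net"
    )

    func randomLocalMAC() (net.HardwareAddr, error) {
        mac := make(net.HardwareAddr, 6)
        if _, err := rand.Read(mac); err != nil {
            return nil, err
        }
        mac[0] = (mac[0] | 0x02) &^ 0x01 // set locally-administered bit, clear multicast bit
        return mac, nil
    }

    func main() {
        mac, err := randomLocalMAC()
        if err != nil {
            panic(err)
        }
        fmt.Println(mac) // e.g. a 5e:...-style unicast address
    }
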
Dec 13 00:22:06.837038 containerd[1633]: 2025-12-13 00:22:06.803 [INFO][4890] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" HandleID="k8s-pod-network.7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Workload="localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0" Dec 13 00:22:06.839178 containerd[1633]: 2025-12-13 00:22:06.810 [INFO][4862] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-tgd79" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0", GenerateName:"calico-apiserver-58486567b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b431ca61-6062-45e4-a35d-3ec7ff6dccb1", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58486567b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-58486567b6-tgd79", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2413283f92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:06.839178 containerd[1633]: 2025-12-13 00:22:06.810 [INFO][4862] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-tgd79" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0" Dec 13 00:22:06.839178 containerd[1633]: 2025-12-13 00:22:06.810 [INFO][4862] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2413283f92 ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-tgd79" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0" Dec 13 00:22:06.839178 containerd[1633]: 2025-12-13 00:22:06.817 [INFO][4862] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-tgd79" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0" Dec 13 00:22:06.839178 containerd[1633]: 2025-12-13 00:22:06.818 [INFO][4862] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-tgd79" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0", GenerateName:"calico-apiserver-58486567b6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b431ca61-6062-45e4-a35d-3ec7ff6dccb1", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 13, 0, 21, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58486567b6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9", Pod:"calico-apiserver-58486567b6-tgd79", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2413283f92", MAC:"f6:04:ec:2d:75:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 13 00:22:06.839178 containerd[1633]: 2025-12-13 00:22:06.828 [INFO][4862] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" Namespace="calico-apiserver" Pod="calico-apiserver-58486567b6-tgd79" WorkloadEndpoint="localhost-k8s-calico--apiserver--58486567b6--tgd79-eth0" Dec 13 00:22:06.856000 audit[4963]: NETFILTER_CFG table=filter:132 family=2 entries=61 op=nft_register_chain pid=4963 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 00:22:06.856000 audit[4963]: SYSCALL arch=c000003e syscall=46 success=yes exit=29016 a0=3 a1=7ffd4e020100 a2=0 a3=7ffd4e0200ec items=0 ppid=4231 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.856000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 00:22:06.864875 containerd[1633]: time="2025-12-13T00:22:06.864686400Z" level=info msg="connecting to shim 7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9" address="unix:///run/containerd/s/86914e4ebc9418695e0b0d6689b4f5138e9e3b7611c986a3ff5de099320d898b" namespace=k8s.io protocol=ttrpc version=3 Dec 13 00:22:06.878101 containerd[1633]: time="2025-12-13T00:22:06.878048575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cf9f886c6-9fch9,Uid:fce4aad9-52fa-4b91-82ff-c6436952148b,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"78fbf9357b62e7dcc79344057b9fb9a57d617bd45c906725623ba9bfd3754664\"" Dec 13 00:22:06.901015 systemd-networkd[1321]: vxlan.calico: Gained IPv6LL Dec 13 00:22:06.907176 systemd[1]: Started cri-containerd-7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9.scope - libcontainer container 7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9. Dec 13 00:22:06.920000 audit: BPF prog-id=260 op=LOAD Dec 13 00:22:06.922065 kernel: kauditd_printk_skb: 437 callbacks suppressed Dec 13 00:22:06.922137 kernel: audit: type=1334 audit(1765585326.920:744): prog-id=260 op=LOAD Dec 13 00:22:06.924709 kernel: audit: type=1334 audit(1765585326.920:745): prog-id=261 op=LOAD Dec 13 00:22:06.920000 audit: BPF prog-id=261 op=LOAD Dec 13 00:22:06.923890 systemd-resolved[1290]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 00:22:06.920000 audit[4990]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4972 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766656164623936623634366161313533303437313932396539643235 Dec 13 00:22:06.936243 kernel: audit: type=1300 audit(1765585326.920:745): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4972 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.936310 kernel: audit: type=1327 audit(1765585326.920:745): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766656164623936623634366161313533303437313932396539643235 Dec 13 00:22:06.920000 audit: BPF prog-id=261 op=UNLOAD Dec 13 00:22:06.937931 kernel: audit: type=1334 audit(1765585326.920:746): prog-id=261 op=UNLOAD Dec 13 00:22:06.943890 kernel: audit: type=1300 audit(1765585326.920:746): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.920000 audit[4990]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.920000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766656164623936623634366161313533303437313932396539643235 Dec 13 00:22:06.949291 kernel: audit: type=1327 audit(1765585326.920:746): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766656164623936623634366161313533303437313932396539643235 Dec 13 00:22:06.949385 kernel: audit: type=1334 audit(1765585326.921:747): prog-id=262 op=LOAD Dec 13 00:22:06.921000 audit: BPF prog-id=262 op=LOAD Dec 13 00:22:06.955834 kernel: audit: type=1300 audit(1765585326.921:747): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4972 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.921000 audit[4990]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4972 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.960831 kernel: audit: type=1327 audit(1765585326.921:747): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766656164623936623634366161313533303437313932396539643235 Dec 13 00:22:06.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766656164623936623634366161313533303437313932396539643235 Dec 13 00:22:06.921000 audit: BPF prog-id=263 op=LOAD Dec 13 00:22:06.921000 audit[4990]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4972 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766656164623936623634366161313533303437313932396539643235 Dec 13 00:22:06.921000 audit: BPF prog-id=263 op=UNLOAD Dec 13 00:22:06.921000 audit[4990]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766656164623936623634366161313533303437313932396539643235 Dec 13 00:22:06.921000 audit: BPF prog-id=262 op=UNLOAD Dec 13 00:22:06.921000 audit[4990]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4972 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.921000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766656164623936623634366161313533303437313932396539643235 Dec 13 00:22:06.921000 audit: BPF prog-id=264 op=LOAD Dec 13 00:22:06.921000 audit[4990]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4972 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:06.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766656164623936623634366161313533303437313932396539643235 Dec 13 00:22:06.970277 containerd[1633]: time="2025-12-13T00:22:06.970225825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58486567b6-tgd79,Uid:b431ca61-6062-45e4-a35d-3ec7ff6dccb1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7feadb96b646aa1530471929e9d25f8a96cfba76e9a639cb1fcfa8383ecb68a9\"" Dec 13 00:22:07.070073 containerd[1633]: time="2025-12-13T00:22:07.070002599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:07.071421 containerd[1633]: time="2025-12-13T00:22:07.071360187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:22:07.071572 containerd[1633]: time="2025-12-13T00:22:07.071442752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:07.071679 kubelet[2818]: E1213 00:22:07.071623 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:22:07.071756 kubelet[2818]: E1213 00:22:07.071682 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:22:07.072138 kubelet[2818]: E1213 00:22:07.072024 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq9mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wvdrp_calico-system(dedbe661-92c2-4c3f-9ab9-3f4df404e3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:07.072269 containerd[1633]: time="2025-12-13T00:22:07.072117258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:22:07.073262 kubelet[2818]: E1213 00:22:07.073222 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:22:07.092027 systemd-networkd[1321]: calif6f2cc94135: Gained IPv6LL Dec 13 00:22:07.426273 containerd[1633]: time="2025-12-13T00:22:07.426094633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:07.427400 containerd[1633]: time="2025-12-13T00:22:07.427351171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:22:07.427553 containerd[1633]: time="2025-12-13T00:22:07.427397247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:07.427672 kubelet[2818]: E1213 00:22:07.427622 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:22:07.427734 kubelet[2818]: E1213 00:22:07.427678 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:22:07.428085 containerd[1633]: time="2025-12-13T00:22:07.428058248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:22:07.428143 kubelet[2818]: E1213 00:22:07.428035 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntr4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cf9f886c6-9fch9_calico-system(fce4aad9-52fa-4b91-82ff-c6436952148b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:07.429440 kubelet[2818]: E1213 00:22:07.429364 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" podUID="fce4aad9-52fa-4b91-82ff-c6436952148b" Dec 13 00:22:07.732045 systemd-networkd[1321]: calid6a812d4911: Gained IPv6LL Dec 13 00:22:07.736179 kubelet[2818]: E1213 00:22:07.736122 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" podUID="fce4aad9-52fa-4b91-82ff-c6436952148b" Dec 13 00:22:07.738108 kubelet[2818]: E1213 00:22:07.738086 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:07.738233 kubelet[2818]: E1213 00:22:07.738174 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" podUID="00a965a0-569e-4742-bf83-196c624e0f8f" Dec 13 00:22:07.738422 kubelet[2818]: E1213 00:22:07.737563 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:07.738610 kubelet[2818]: E1213 00:22:07.738576 2818 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:22:07.776295 containerd[1633]: time="2025-12-13T00:22:07.776219198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:07.834551 containerd[1633]: time="2025-12-13T00:22:07.834429892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:07.834551 containerd[1633]: time="2025-12-13T00:22:07.834479334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:22:07.834842 kubelet[2818]: E1213 00:22:07.834772 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:07.834922 kubelet[2818]: E1213 00:22:07.834854 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:07.835077 kubelet[2818]: E1213 00:22:07.835009 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfmql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58486567b6-tgd79_calico-apiserver(b431ca61-6062-45e4-a35d-3ec7ff6dccb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:07.836244 kubelet[2818]: E1213 00:22:07.836191 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" podUID="b431ca61-6062-45e4-a35d-3ec7ff6dccb1" Dec 13 00:22:07.840000 audit[5016]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=5016 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:07.840000 audit[5016]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffda1dad940 a2=0 a3=7ffda1dad92c items=0 ppid=2927 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:07.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:07.850000 audit[5016]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5016 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:07.850000 audit[5016]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffda1dad940 a2=0 a3=7ffda1dad92c items=0 ppid=2927 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:07.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:08.180042 systemd-networkd[1321]: calie2413283f92: Gained IPv6LL Dec 13 00:22:08.501017 systemd-networkd[1321]: cali6ab308fea44: Gained IPv6LL Dec 13 00:22:08.740316 kubelet[2818]: E1213 00:22:08.740268 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" podUID="b431ca61-6062-45e4-a35d-3ec7ff6dccb1" Dec 13 00:22:08.741608 kubelet[2818]: E1213 00:22:08.741101 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" podUID="fce4aad9-52fa-4b91-82ff-c6436952148b" Dec 13 00:22:08.780000 audit[5020]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:08.780000 audit[5020]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffccc3c49c0 a2=0 a3=7ffccc3c49ac items=0 ppid=2927 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:08.780000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:08.786000 audit[5020]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:08.786000 audit[5020]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffccc3c49c0 a2=0 a3=7ffccc3c49ac items=0 ppid=2927 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:08.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:10.585550 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:44086.service - OpenSSH per-connection server daemon (10.0.0.1:44086). Dec 13 00:22:10.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.65:22-10.0.0.1:44086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:10.684000 audit[5031]: USER_ACCT pid=5031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:10.685794 sshd[5031]: Accepted publickey for core from 10.0.0.1 port 44086 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:10.686000 audit[5031]: CRED_ACQ pid=5031 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:10.686000 audit[5031]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd3de47bb0 a2=3 a3=0 items=0 ppid=1 pid=5031 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:10.686000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:10.688327 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:10.693660 systemd-logind[1605]: New session 11 of user core. Dec 13 00:22:10.700257 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 00:22:10.703000 audit[5031]: USER_START pid=5031 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:10.705000 audit[5035]: CRED_ACQ pid=5035 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:10.812515 sshd[5035]: Connection closed by 10.0.0.1 port 44086 Dec 13 00:22:10.812934 sshd-session[5031]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:10.813000 audit[5031]: USER_END pid=5031 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:10.814000 audit[5031]: CRED_DISP pid=5031 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:10.818445 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:44086.service: Deactivated successfully. Dec 13 00:22:10.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.65:22-10.0.0.1:44086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:10.820621 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 00:22:10.821428 systemd-logind[1605]: Session 11 logged out. Waiting for processes to exit. Dec 13 00:22:10.822803 systemd-logind[1605]: Removed session 11. 
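The image pulls above for the ghcr.io/flatcar/calico images at tag v3.30.4 all fail with NotFound (the registry answers 404), so the kubelet moves the affected containers from ErrImagePull into ImagePullBackOff and retries with an exponentially growing delay. A rough sketch of that retry schedule follows; the 10-second base and 5-minute cap are assumptions taken from the kubelet's documented defaults, not values read from this log:

    def backoff_schedule(base: int = 10, cap: int = 300, attempts: int = 8):
        """Assumed image-pull back-off delays, in seconds: doubling from base, clamped at cap."""
        delay = base
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    # 10, 20, 40, 80, 160, 300, 300, 300
    print(list(backoff_schedule()))
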
Dec 13 00:22:15.830835 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:44094.service - OpenSSH per-connection server daemon (10.0.0.1:44094). Dec 13 00:22:15.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.65:22-10.0.0.1:44094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:15.832374 kernel: kauditd_printk_skb: 35 callbacks suppressed Dec 13 00:22:15.832516 kernel: audit: type=1130 audit(1765585335.830:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.65:22-10.0.0.1:44094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:15.893000 audit[5059]: USER_ACCT pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:15.894847 sshd[5059]: Accepted publickey for core from 10.0.0.1 port 44094 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:15.898280 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:15.893000 audit[5059]: CRED_ACQ pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:15.904479 systemd-logind[1605]: New session 12 of user core. Dec 13 00:22:15.906445 kernel: audit: type=1101 audit(1765585335.893:766): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:15.906522 kernel: audit: type=1103 audit(1765585335.893:767): pid=5059 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:15.906566 kernel: audit: type=1006 audit(1765585335.893:768): pid=5059 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 13 00:22:15.893000 audit[5059]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc59e73820 a2=3 a3=0 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:15.893000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:15.918339 kernel: audit: type=1300 audit(1765585335.893:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc59e73820 a2=3 a3=0 items=0 ppid=1 pid=5059 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:15.918435 kernel: audit: type=1327 audit(1765585335.893:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:15.924194 systemd[1]: Started session-12.scope - Session 12 of User core. 
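Several audit records in this log carry a hex-encoded proctitle field: the kernel records the process title as the raw argv with NUL bytes between arguments, and audit prints it as hex. A small decoder for reading those fields, shown only as a convenience for interpreting the records above:

    def decode_proctitle(hexstr: str) -> str:
        """Decode an audit PROCTITLE value: hex-encoded argv with NUL separators."""
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode(errors="replace")

    # Values copied from audit records earlier in this log.
    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # -> sshd-session: core [priv]
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters
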
Dec 13 00:22:15.927000 audit[5059]: USER_START pid=5059 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:15.929000 audit[5063]: CRED_ACQ pid=5063 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:15.940351 kernel: audit: type=1105 audit(1765585335.927:769): pid=5059 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:15.940527 kernel: audit: type=1103 audit(1765585335.929:770): pid=5063 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:16.047235 sshd[5063]: Connection closed by 10.0.0.1 port 44094 Dec 13 00:22:16.047526 sshd-session[5059]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:16.048000 audit[5059]: USER_END pid=5059 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:16.053785 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:44094.service: Deactivated successfully. Dec 13 00:22:16.048000 audit[5059]: CRED_DISP pid=5059 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:16.056164 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 00:22:16.057097 systemd-logind[1605]: Session 12 logged out. Waiting for processes to exit. Dec 13 00:22:16.058341 systemd-logind[1605]: Removed session 12. Dec 13 00:22:16.059520 kernel: audit: type=1106 audit(1765585336.048:771): pid=5059 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:16.059622 kernel: audit: type=1104 audit(1765585336.048:772): pid=5059 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:16.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.65:22-10.0.0.1:44094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:18.573969 containerd[1633]: time="2025-12-13T00:22:18.573861167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:22:18.896396 containerd[1633]: time="2025-12-13T00:22:18.896228978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:18.958236 containerd[1633]: time="2025-12-13T00:22:18.958162218Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:22:18.958422 containerd[1633]: time="2025-12-13T00:22:18.958209009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:18.958522 kubelet[2818]: E1213 00:22:18.958473 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:22:18.958989 kubelet[2818]: E1213 00:22:18.958532 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:22:18.958989 kubelet[2818]: E1213 00:22:18.958667 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2da41ffab71742d79c306eb97befe8f6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkjbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d686f8ffb-wbp4f_calico-system(5353c832-e4bf-4b05-bc32-552262f10d42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:18.960784 containerd[1633]: time="2025-12-13T00:22:18.960731007Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:22:19.327345 containerd[1633]: time="2025-12-13T00:22:19.327290325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:19.404373 containerd[1633]: time="2025-12-13T00:22:19.404283893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:22:19.404560 containerd[1633]: time="2025-12-13T00:22:19.404406560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:19.404604 kubelet[2818]: E1213 00:22:19.404535 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:22:19.404604 kubelet[2818]: E1213 00:22:19.404589 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:22:19.404776 kubelet[2818]: E1213 00:22:19.404726 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkjbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-6d686f8ffb-wbp4f_calico-system(5353c832-e4bf-4b05-bc32-552262f10d42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:19.405899 kubelet[2818]: E1213 00:22:19.405858 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d686f8ffb-wbp4f" podUID="5353c832-e4bf-4b05-bc32-552262f10d42" Dec 13 00:22:19.573176 containerd[1633]: time="2025-12-13T00:22:19.573127726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:22:19.901694 containerd[1633]: time="2025-12-13T00:22:19.901620518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:19.902937 containerd[1633]: time="2025-12-13T00:22:19.902880493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:22:19.903019 containerd[1633]: time="2025-12-13T00:22:19.902955419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:19.903166 kubelet[2818]: E1213 00:22:19.903106 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:22:19.903235 kubelet[2818]: E1213 00:22:19.903171 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:22:19.903521 kubelet[2818]: E1213 00:22:19.903430 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntr4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cf9f886c6-9fch9_calico-system(fce4aad9-52fa-4b91-82ff-c6436952148b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:19.903787 containerd[1633]: time="2025-12-13T00:22:19.903471085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:22:19.904923 kubelet[2818]: E1213 00:22:19.904874 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" podUID="fce4aad9-52fa-4b91-82ff-c6436952148b" Dec 13 00:22:20.327054 containerd[1633]: time="2025-12-13T00:22:20.326987498Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:20.328456 containerd[1633]: time="2025-12-13T00:22:20.328383714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:22:20.328538 containerd[1633]: time="2025-12-13T00:22:20.328452417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:20.328741 kubelet[2818]: E1213 00:22:20.328670 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:22:20.328741 kubelet[2818]: E1213 00:22:20.328740 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:22:20.329163 kubelet[2818]: E1213 00:22:20.328873 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq9mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wvdrp_calico-system(dedbe661-92c2-4c3f-9ab9-3f4df404e3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:20.331720 
containerd[1633]: time="2025-12-13T00:22:20.331669771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:22:20.671139 containerd[1633]: time="2025-12-13T00:22:20.671002838Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:20.672261 containerd[1633]: time="2025-12-13T00:22:20.672218636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:22:20.672313 containerd[1633]: time="2025-12-13T00:22:20.672284523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:20.672445 kubelet[2818]: E1213 00:22:20.672400 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:22:20.672512 kubelet[2818]: E1213 00:22:20.672453 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:22:20.673000 kubelet[2818]: E1213 00:22:20.672643 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq9mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wvdrp_calico-system(dedbe661-92c2-4c3f-9ab9-3f4df404e3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:20.673150 containerd[1633]: time="2025-12-13T00:22:20.672779519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:22:20.674189 kubelet[2818]: E1213 00:22:20.674095 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:22:20.998334 containerd[1633]: time="2025-12-13T00:22:20.998197131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:20.999796 containerd[1633]: time="2025-12-13T00:22:20.999763136Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:22:20.999891 containerd[1633]: time="2025-12-13T00:22:20.999832460Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:21.000027 kubelet[2818]: E1213 00:22:20.999992 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:22:21.000116 kubelet[2818]: E1213 00:22:21.000042 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:22:21.000248 kubelet[2818]: E1213 00:22:21.000199 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbbhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod goldmane-666569f655-f54tn_calico-system(7ce6ad04-f89c-40a1-981e-2b7e39fe58e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:21.001437 kubelet[2818]: E1213 00:22:21.001387 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f54tn" podUID="7ce6ad04-f89c-40a1-981e-2b7e39fe58e0" Dec 13 00:22:21.072596 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:51864.service - OpenSSH per-connection server daemon (10.0.0.1:51864). Dec 13 00:22:21.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.65:22-10.0.0.1:51864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:21.074254 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:22:21.074319 kernel: audit: type=1130 audit(1765585341.070:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.65:22-10.0.0.1:51864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:21.145000 audit[5082]: USER_ACCT pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.147677 sshd[5082]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:21.150638 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:21.147000 audit[5082]: CRED_ACQ pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.156463 systemd-logind[1605]: New session 13 of user core. 
Dec 13 00:22:21.159924 kernel: audit: type=1101 audit(1765585341.145:775): pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.159990 kernel: audit: type=1103 audit(1765585341.147:776): pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.160018 kernel: audit: type=1006 audit(1765585341.147:777): pid=5082 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 13 00:22:21.147000 audit[5082]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd297b9aa0 a2=3 a3=0 items=0 ppid=1 pid=5082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:21.169356 kernel: audit: type=1300 audit(1765585341.147:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd297b9aa0 a2=3 a3=0 items=0 ppid=1 pid=5082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:21.169402 kernel: audit: type=1327 audit(1765585341.147:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:21.147000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:21.175993 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 13 00:22:21.177000 audit[5082]: USER_START pid=5082 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.186832 kernel: audit: type=1105 audit(1765585341.177:778): pid=5082 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.186938 kernel: audit: type=1103 audit(1765585341.180:779): pid=5086 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.180000 audit[5086]: CRED_ACQ pid=5086 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.256867 sshd[5086]: Connection closed by 10.0.0.1 port 51864 Dec 13 00:22:21.257072 sshd-session[5082]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:21.257000 audit[5082]: USER_END pid=5082 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.263055 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:51864.service: Deactivated successfully. Dec 13 00:22:21.265635 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 00:22:21.257000 audit[5082]: CRED_DISP pid=5082 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.267045 systemd-logind[1605]: Session 13 logged out. Waiting for processes to exit. Dec 13 00:22:21.269459 systemd-logind[1605]: Removed session 13. Dec 13 00:22:21.272308 kernel: audit: type=1106 audit(1765585341.257:780): pid=5082 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.272385 kernel: audit: type=1104 audit(1765585341.257:781): pid=5082 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:21.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.65:22-10.0.0.1:51864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:21.573491 containerd[1633]: time="2025-12-13T00:22:21.573275681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:22:22.000142 containerd[1633]: time="2025-12-13T00:22:21.999993687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:22.001285 containerd[1633]: time="2025-12-13T00:22:22.001242477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:22:22.001458 containerd[1633]: time="2025-12-13T00:22:22.001322220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:22.001500 kubelet[2818]: E1213 00:22:22.001459 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:22.001891 kubelet[2818]: E1213 00:22:22.001510 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:22.001930 containerd[1633]: time="2025-12-13T00:22:22.001838046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:22:22.001973 kubelet[2818]: E1213 00:22:22.001851 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhk5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58486567b6-lnz72_calico-apiserver(00a965a0-569e-4742-bf83-196c624e0f8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:22.003482 kubelet[2818]: E1213 00:22:22.003423 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" podUID="00a965a0-569e-4742-bf83-196c624e0f8f" Dec 13 00:22:22.360234 containerd[1633]: time="2025-12-13T00:22:22.360164881Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:22.483846 containerd[1633]: time="2025-12-13T00:22:22.483669539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:22.483846 containerd[1633]: time="2025-12-13T00:22:22.483754924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:22:22.484109 kubelet[2818]: E1213 00:22:22.484037 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:22.484109 kubelet[2818]: E1213 00:22:22.484099 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:22.484347 kubelet[2818]: E1213 00:22:22.484237 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfmql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58486567b6-tgd79_calico-apiserver(b431ca61-6062-45e4-a35d-3ec7ff6dccb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:22.485652 kubelet[2818]: E1213 00:22:22.485601 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" podUID="b431ca61-6062-45e4-a35d-3ec7ff6dccb1" Dec 13 00:22:26.275182 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:51868.service - OpenSSH per-connection server daemon (10.0.0.1:51868). Dec 13 00:22:26.276368 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:22:26.276408 kernel: audit: type=1130 audit(1765585346.274:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.65:22-10.0.0.1:51868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:26.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.65:22-10.0.0.1:51868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:26.343000 audit[5108]: USER_ACCT pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.344143 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 51868 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:26.348000 audit[5108]: CRED_ACQ pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.352051 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:26.354339 kernel: audit: type=1101 audit(1765585346.343:784): pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.354395 kernel: audit: type=1103 audit(1765585346.348:785): pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.354412 kernel: audit: type=1006 audit(1765585346.349:786): pid=5108 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 13 00:22:26.357178 kernel: audit: type=1300 audit(1765585346.349:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef2dd5b40 a2=3 a3=0 items=0 ppid=1 pid=5108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:26.349000 audit[5108]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffef2dd5b40 a2=3 a3=0 items=0 ppid=1 pid=5108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:26.358462 systemd-logind[1605]: New session 14 of user core. Dec 13 00:22:26.362307 kernel: audit: type=1327 audit(1765585346.349:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:26.349000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:26.371046 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 13 00:22:26.373000 audit[5108]: USER_START pid=5108 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.375000 audit[5112]: CRED_ACQ pid=5112 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.384432 kernel: audit: type=1105 audit(1765585346.373:787): pid=5108 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.384533 kernel: audit: type=1103 audit(1765585346.375:788): pid=5112 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.498733 sshd[5112]: Connection closed by 10.0.0.1 port 51868 Dec 13 00:22:26.499715 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:26.500000 audit[5108]: USER_END pid=5108 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.500000 audit[5108]: CRED_DISP pid=5108 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.528562 kernel: audit: type=1106 audit(1765585346.500:789): pid=5108 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.528653 kernel: audit: type=1104 audit(1765585346.500:790): pid=5108 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.535287 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:51868.service: Deactivated successfully. Dec 13 00:22:26.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.65:22-10.0.0.1:51868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:26.537478 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 00:22:26.538381 systemd-logind[1605]: Session 14 logged out. Waiting for processes to exit. Dec 13 00:22:26.541304 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:51880.service - OpenSSH per-connection server daemon (10.0.0.1:51880). 
Dec 13 00:22:26.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.65:22-10.0.0.1:51880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:26.542499 systemd-logind[1605]: Removed session 14. Dec 13 00:22:26.607000 audit[5126]: USER_ACCT pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.608116 sshd[5126]: Accepted publickey for core from 10.0.0.1 port 51880 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:26.608000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.608000 audit[5126]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2f645cf0 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:26.608000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:26.610359 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:26.615190 systemd-logind[1605]: New session 15 of user core. Dec 13 00:22:26.622982 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 00:22:26.624000 audit[5126]: USER_START pid=5126 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.626000 audit[5130]: CRED_ACQ pid=5130 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.918587 sshd[5130]: Connection closed by 10.0.0.1 port 51880 Dec 13 00:22:26.918958 sshd-session[5126]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:26.919000 audit[5126]: USER_END pid=5126 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.919000 audit[5126]: CRED_DISP pid=5126 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.935223 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:51880.service: Deactivated successfully. Dec 13 00:22:26.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.65:22-10.0.0.1:51880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 00:22:26.937696 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 00:22:26.938544 systemd-logind[1605]: Session 15 logged out. Waiting for processes to exit. Dec 13 00:22:26.941227 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:51890.service - OpenSSH per-connection server daemon (10.0.0.1:51890). Dec 13 00:22:26.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.65:22-10.0.0.1:51890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:26.941866 systemd-logind[1605]: Removed session 15. Dec 13 00:22:26.996000 audit[5144]: USER_ACCT pid=5144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.997895 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 51890 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:26.997000 audit[5144]: CRED_ACQ pid=5144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:26.998000 audit[5144]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2ad3b520 a2=3 a3=0 items=0 ppid=1 pid=5144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:26.998000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:26.999941 sshd-session[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:27.005230 systemd-logind[1605]: New session 16 of user core. Dec 13 00:22:27.012043 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 13 00:22:27.013000 audit[5144]: USER_START pid=5144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:27.015000 audit[5148]: CRED_ACQ pid=5148 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:27.177017 sshd[5148]: Connection closed by 10.0.0.1 port 51890 Dec 13 00:22:27.177721 sshd-session[5144]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:27.178000 audit[5144]: USER_END pid=5144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:27.178000 audit[5144]: CRED_DISP pid=5144 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:27.184048 systemd-logind[1605]: Session 16 logged out. Waiting for processes to exit. Dec 13 00:22:27.184373 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:51890.service: Deactivated successfully. Dec 13 00:22:27.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.65:22-10.0.0.1:51890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:27.186583 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 00:22:27.189071 systemd-logind[1605]: Removed session 16. Dec 13 00:22:32.201325 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:44374.service - OpenSSH per-connection server daemon (10.0.0.1:44374). Dec 13 00:22:32.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.65:22-10.0.0.1:44374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:32.202614 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 00:22:32.202683 kernel: audit: type=1130 audit(1765585352.200:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.65:22-10.0.0.1:44374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:32.260000 audit[5165]: USER_ACCT pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.261888 sshd[5165]: Accepted publickey for core from 10.0.0.1 port 44374 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:32.264201 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:32.262000 audit[5165]: CRED_ACQ pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.270261 systemd-logind[1605]: New session 17 of user core. Dec 13 00:22:32.276892 kernel: audit: type=1101 audit(1765585352.260:811): pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.277052 kernel: audit: type=1103 audit(1765585352.262:812): pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.277101 kernel: audit: type=1006 audit(1765585352.262:813): pid=5165 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 13 00:22:32.262000 audit[5165]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1d5cecb0 a2=3 a3=0 items=0 ppid=1 pid=5165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:32.282212 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 00:22:32.262000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:32.288127 kernel: audit: type=1300 audit(1765585352.262:813): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1d5cecb0 a2=3 a3=0 items=0 ppid=1 pid=5165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:32.288197 kernel: audit: type=1327 audit(1765585352.262:813): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:32.288229 kernel: audit: type=1105 audit(1765585352.284:814): pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.284000 audit[5165]: USER_START pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.295740 kernel: audit: type=1103 audit(1765585352.286:815): pid=5169 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.286000 audit[5169]: CRED_ACQ pid=5169 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.365563 sshd[5169]: Connection closed by 10.0.0.1 port 44374 Dec 13 00:22:32.365864 sshd-session[5165]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:32.366000 audit[5165]: USER_END pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.371426 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:44374.service: Deactivated successfully. Dec 13 00:22:32.373571 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 00:22:32.374992 systemd-logind[1605]: Session 17 logged out. Waiting for processes to exit. Dec 13 00:22:32.367000 audit[5165]: CRED_DISP pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.375861 systemd-logind[1605]: Removed session 17. 
Dec 13 00:22:32.381142 kernel: audit: type=1106 audit(1765585352.366:816): pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.381218 kernel: audit: type=1104 audit(1765585352.367:817): pid=5165 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:32.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.65:22-10.0.0.1:44374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:32.573436 kubelet[2818]: E1213 00:22:32.573371 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f54tn" podUID="7ce6ad04-f89c-40a1-981e-2b7e39fe58e0" Dec 13 00:22:33.573891 kubelet[2818]: E1213 00:22:33.573836 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" podUID="fce4aad9-52fa-4b91-82ff-c6436952148b" Dec 13 00:22:33.574750 kubelet[2818]: E1213 00:22:33.574109 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:22:34.574271 kubelet[2818]: E1213 00:22:34.574214 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d686f8ffb-wbp4f" podUID="5353c832-e4bf-4b05-bc32-552262f10d42" Dec 13 00:22:35.572992 kubelet[2818]: E1213 00:22:35.572911 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" podUID="b431ca61-6062-45e4-a35d-3ec7ff6dccb1" Dec 13 00:22:37.380970 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:44378.service - OpenSSH per-connection server daemon (10.0.0.1:44378). Dec 13 00:22:37.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.65:22-10.0.0.1:44378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:37.382765 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:22:37.382893 kernel: audit: type=1130 audit(1765585357.380:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.65:22-10.0.0.1:44378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:37.469000 audit[5211]: USER_ACCT pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.470788 sshd[5211]: Accepted publickey for core from 10.0.0.1 port 44378 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:37.473178 sshd-session[5211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:37.471000 audit[5211]: CRED_ACQ pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.478636 systemd-logind[1605]: New session 18 of user core. 
Dec 13 00:22:37.483517 kernel: audit: type=1101 audit(1765585357.469:820): pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.483588 kernel: audit: type=1103 audit(1765585357.471:821): pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.483621 kernel: audit: type=1006 audit(1765585357.471:822): pid=5211 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 13 00:22:37.471000 audit[5211]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc60742b90 a2=3 a3=0 items=0 ppid=1 pid=5211 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:37.494098 kernel: audit: type=1300 audit(1765585357.471:822): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc60742b90 a2=3 a3=0 items=0 ppid=1 pid=5211 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:37.494167 kernel: audit: type=1327 audit(1765585357.471:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:37.471000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:37.495151 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 13 00:22:37.497000 audit[5211]: USER_START pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.499000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.511614 kernel: audit: type=1105 audit(1765585357.497:823): pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.511741 kernel: audit: type=1103 audit(1765585357.499:824): pid=5215 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.573932 kubelet[2818]: E1213 00:22:37.573871 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" podUID="00a965a0-569e-4742-bf83-196c624e0f8f" Dec 13 00:22:37.630764 sshd[5215]: Connection closed by 10.0.0.1 port 44378 Dec 13 00:22:37.631133 sshd-session[5211]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:37.632000 audit[5211]: USER_END pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.637343 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:44378.service: Deactivated successfully. Dec 13 00:22:37.632000 audit[5211]: CRED_DISP pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.640055 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 00:22:37.641106 systemd-logind[1605]: Session 18 logged out. Waiting for processes to exit. Dec 13 00:22:37.643649 systemd-logind[1605]: Removed session 18. 
Dec 13 00:22:37.731145 kernel: audit: type=1106 audit(1765585357.632:825): pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.731305 kernel: audit: type=1104 audit(1765585357.632:826): pid=5211 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:37.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.65:22-10.0.0.1:44378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:42.645315 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:48932.service - OpenSSH per-connection server daemon (10.0.0.1:48932). Dec 13 00:22:42.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.65:22-10.0.0.1:48932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:42.646916 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:22:42.646989 kernel: audit: type=1130 audit(1765585362.644:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.65:22-10.0.0.1:48932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:42.709000 audit[5229]: USER_ACCT pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.710870 sshd[5229]: Accepted publickey for core from 10.0.0.1 port 48932 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:42.713299 sshd-session[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:42.711000 audit[5229]: CRED_ACQ pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.718759 systemd-logind[1605]: New session 19 of user core. 
Dec 13 00:22:42.722067 kernel: audit: type=1101 audit(1765585362.709:829): pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.722135 kernel: audit: type=1103 audit(1765585362.711:830): pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.722159 kernel: audit: type=1006 audit(1765585362.711:831): pid=5229 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 13 00:22:42.711000 audit[5229]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd83ed920 a2=3 a3=0 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:42.731654 kernel: audit: type=1300 audit(1765585362.711:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd83ed920 a2=3 a3=0 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:42.731706 kernel: audit: type=1327 audit(1765585362.711:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:42.711000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:42.741148 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 13 00:22:42.743000 audit[5229]: USER_START pid=5229 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.745000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.757415 kernel: audit: type=1105 audit(1765585362.743:832): pid=5229 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.757509 kernel: audit: type=1103 audit(1765585362.745:833): pid=5233 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.838870 sshd[5233]: Connection closed by 10.0.0.1 port 48932 Dec 13 00:22:42.839259 sshd-session[5229]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:42.840000 audit[5229]: USER_END pid=5229 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.845551 systemd-logind[1605]: Session 19 logged out. Waiting for processes to exit. Dec 13 00:22:42.845970 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:48932.service: Deactivated successfully. Dec 13 00:22:42.840000 audit[5229]: CRED_DISP pid=5229 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.849232 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 00:22:42.852594 kernel: audit: type=1106 audit(1765585362.840:834): pid=5229 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.852661 kernel: audit: type=1104 audit(1765585362.840:835): pid=5229 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:42.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.65:22-10.0.0.1:48932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:42.852140 systemd-logind[1605]: Removed session 19. 
Dec 13 00:22:44.586273 kubelet[2818]: E1213 00:22:44.586224 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:44.586747 kubelet[2818]: E1213 00:22:44.586223 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:44.591238 containerd[1633]: time="2025-12-13T00:22:44.591181704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 13 00:22:45.572300 kubelet[2818]: E1213 00:22:45.572248 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:46.090743 containerd[1633]: time="2025-12-13T00:22:46.090653431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:46.243493 containerd[1633]: time="2025-12-13T00:22:46.243376332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:46.243493 containerd[1633]: time="2025-12-13T00:22:46.243425015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 13 00:22:46.243822 kubelet[2818]: E1213 00:22:46.243747 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:22:46.244224 kubelet[2818]: E1213 00:22:46.243849 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 13 00:22:46.244224 kubelet[2818]: E1213 00:22:46.244041 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntr4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cf9f886c6-9fch9_calico-system(fce4aad9-52fa-4b91-82ff-c6436952148b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:46.245258 kubelet[2818]: E1213 00:22:46.245228 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" podUID="fce4aad9-52fa-4b91-82ff-c6436952148b" Dec 13 00:22:46.574713 containerd[1633]: time="2025-12-13T00:22:46.574653890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 13 00:22:46.935097 containerd[1633]: 
time="2025-12-13T00:22:46.934905554Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:46.937432 containerd[1633]: time="2025-12-13T00:22:46.937361529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 13 00:22:46.937698 containerd[1633]: time="2025-12-13T00:22:46.937635681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:46.937758 kubelet[2818]: E1213 00:22:46.937702 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:22:46.937836 kubelet[2818]: E1213 00:22:46.937777 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 13 00:22:46.938065 kubelet[2818]: E1213 00:22:46.937972 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbbhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-f54tn_calico-system(7ce6ad04-f89c-40a1-981e-2b7e39fe58e0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:46.939319 kubelet[2818]: E1213 00:22:46.939231 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f54tn" podUID="7ce6ad04-f89c-40a1-981e-2b7e39fe58e0" Dec 13 00:22:47.574302 containerd[1633]: time="2025-12-13T00:22:47.574260418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 13 00:22:47.857920 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:48938.service - OpenSSH per-connection server daemon (10.0.0.1:48938). Dec 13 00:22:47.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.65:22-10.0.0.1:48938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:47.859874 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:22:47.859937 kernel: audit: type=1130 audit(1765585367.856:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.65:22-10.0.0.1:48938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:47.908727 containerd[1633]: time="2025-12-13T00:22:47.908649243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:47.938240 containerd[1633]: time="2025-12-13T00:22:47.938086872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 13 00:22:47.938461 containerd[1633]: time="2025-12-13T00:22:47.938257426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:47.938580 kubelet[2818]: E1213 00:22:47.938481 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:22:47.938580 kubelet[2818]: E1213 00:22:47.938552 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 13 00:22:47.939242 kubelet[2818]: E1213 00:22:47.938857 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq9mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wvdrp_calico-system(dedbe661-92c2-4c3f-9ab9-3f4df404e3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:47.939356 containerd[1633]: time="2025-12-13T00:22:47.938992676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 13 00:22:47.964000 audit[5254]: USER_ACCT pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:47.966339 sshd[5254]: Accepted publickey for core from 10.0.0.1 port 48938 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:47.969668 sshd-session[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:47.977890 kernel: audit: type=1101 audit(1765585367.964:838): pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:47.978005 kernel: audit: type=1103 audit(1765585367.966:839): pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:47.966000 audit[5254]: CRED_ACQ pid=5254 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:47.976848 systemd-logind[1605]: New session 20 of user core. Dec 13 00:22:47.981965 kernel: audit: type=1006 audit(1765585367.966:840): pid=5254 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 13 00:22:47.982112 kernel: audit: type=1300 audit(1765585367.966:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5dcbdaa0 a2=3 a3=0 items=0 ppid=1 pid=5254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:47.966000 audit[5254]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5dcbdaa0 a2=3 a3=0 items=0 ppid=1 pid=5254 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:47.966000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:47.989990 kernel: audit: type=1327 audit(1765585367.966:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:47.996120 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 00:22:47.997000 audit[5254]: USER_START pid=5254 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:48.000000 audit[5258]: CRED_ACQ pid=5258 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:48.011542 kernel: audit: type=1105 audit(1765585367.997:841): pid=5254 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:48.011631 kernel: audit: type=1103 audit(1765585368.000:842): pid=5258 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:48.086038 sshd[5258]: Connection closed by 10.0.0.1 port 48938 Dec 13 00:22:48.086373 sshd-session[5254]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:48.086000 audit[5254]: USER_END pid=5254 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:48.093619 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:48938.service: Deactivated successfully. Dec 13 00:22:48.086000 audit[5254]: CRED_DISP pid=5254 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:48.096798 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 00:22:48.098153 systemd-logind[1605]: Session 20 logged out. Waiting for processes to exit. Dec 13 00:22:48.100003 systemd-logind[1605]: Removed session 20. Dec 13 00:22:48.100532 kernel: audit: type=1106 audit(1765585368.086:843): pid=5254 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:48.100579 kernel: audit: type=1104 audit(1765585368.086:844): pid=5254 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:48.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.65:22-10.0.0.1:48938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:22:48.292750 containerd[1633]: time="2025-12-13T00:22:48.292598410Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:48.409551 containerd[1633]: time="2025-12-13T00:22:48.409441163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 13 00:22:48.409924 containerd[1633]: time="2025-12-13T00:22:48.409536054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:48.410215 kubelet[2818]: E1213 00:22:48.410134 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:22:48.410215 kubelet[2818]: E1213 00:22:48.410214 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 13 00:22:48.410679 containerd[1633]: time="2025-12-13T00:22:48.410643631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 13 00:22:48.410729 kubelet[2818]: E1213 00:22:48.410590 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2da41ffab71742d79c306eb97befe8f6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkjbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d686f8ffb-wbp4f_calico-system(5353c832-e4bf-4b05-bc32-552262f10d42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:48.828671 containerd[1633]: time="2025-12-13T00:22:48.828617063Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:48.998905 containerd[1633]: time="2025-12-13T00:22:48.998841966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:48.999054 containerd[1633]: time="2025-12-13T00:22:48.998858568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 13 00:22:48.999180 kubelet[2818]: E1213 00:22:48.999134 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:22:48.999511 kubelet[2818]: E1213 00:22:48.999194 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 13 00:22:48.999511 kubelet[2818]: E1213 00:22:48.999421 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hq9mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wvdrp_calico-system(dedbe661-92c2-4c3f-9ab9-3f4df404e3b1): ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:49.000694 kubelet[2818]: E1213 00:22:49.000646 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:22:49.002496 containerd[1633]: time="2025-12-13T00:22:49.002452365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 13 00:22:49.380828 containerd[1633]: time="2025-12-13T00:22:49.380741215Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:49.382206 containerd[1633]: time="2025-12-13T00:22:49.382166926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 13 00:22:49.382286 containerd[1633]: time="2025-12-13T00:22:49.382203406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:49.382494 kubelet[2818]: E1213 00:22:49.382436 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:22:49.382567 kubelet[2818]: E1213 00:22:49.382505 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 13 00:22:49.382820 kubelet[2818]: E1213 00:22:49.382750 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkjbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d686f8ffb-wbp4f_calico-system(5353c832-e4bf-4b05-bc32-552262f10d42): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:49.383065 containerd[1633]: time="2025-12-13T00:22:49.382873219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:22:49.384466 kubelet[2818]: E1213 00:22:49.384429 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d686f8ffb-wbp4f" podUID="5353c832-e4bf-4b05-bc32-552262f10d42" Dec 13 00:22:49.676329 containerd[1633]: time="2025-12-13T00:22:49.676124925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:49.696696 containerd[1633]: time="2025-12-13T00:22:49.696577348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 
13 00:22:49.696907 containerd[1633]: time="2025-12-13T00:22:49.696753504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:49.697115 kubelet[2818]: E1213 00:22:49.697049 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:49.697202 kubelet[2818]: E1213 00:22:49.697122 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:49.697352 kubelet[2818]: E1213 00:22:49.697297 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfmql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58486567b6-tgd79_calico-apiserver(b431ca61-6062-45e4-a35d-3ec7ff6dccb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:49.699445 kubelet[2818]: E1213 00:22:49.699280 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" podUID="b431ca61-6062-45e4-a35d-3ec7ff6dccb1" Dec 13 00:22:50.573641 containerd[1633]: time="2025-12-13T00:22:50.573360344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 13 00:22:50.915872 containerd[1633]: time="2025-12-13T00:22:50.915679081Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 13 00:22:50.917049 containerd[1633]: time="2025-12-13T00:22:50.916972259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 13 00:22:50.917131 containerd[1633]: time="2025-12-13T00:22:50.917041251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 13 00:22:50.917242 kubelet[2818]: E1213 00:22:50.917185 2818 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:50.917634 kubelet[2818]: E1213 00:22:50.917243 2818 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 13 00:22:50.917634 kubelet[2818]: E1213 00:22:50.917393 2818 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhk5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-58486567b6-lnz72_calico-apiserver(00a965a0-569e-4742-bf83-196c624e0f8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 13 00:22:50.918618 kubelet[2818]: E1213 00:22:50.918589 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" podUID="00a965a0-569e-4742-bf83-196c624e0f8f" Dec 13 00:22:53.108194 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:43976.service - OpenSSH per-connection server daemon (10.0.0.1:43976). Dec 13 00:22:53.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.65:22-10.0.0.1:43976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:53.109452 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:22:53.109613 kernel: audit: type=1130 audit(1765585373.107:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.65:22-10.0.0.1:43976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:53.163000 audit[5272]: USER_ACCT pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.164990 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 43976 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:53.167348 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:53.165000 audit[5272]: CRED_ACQ pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.172912 systemd-logind[1605]: New session 21 of user core. 
Dec 13 00:22:53.175284 kernel: audit: type=1101 audit(1765585373.163:847): pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.175352 kernel: audit: type=1103 audit(1765585373.165:848): pid=5272 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.175401 kernel: audit: type=1006 audit(1765585373.165:849): pid=5272 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 13 00:22:53.178176 kernel: audit: type=1300 audit(1765585373.165:849): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4618d470 a2=3 a3=0 items=0 ppid=1 pid=5272 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:53.165000 audit[5272]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4618d470 a2=3 a3=0 items=0 ppid=1 pid=5272 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:53.183361 kernel: audit: type=1327 audit(1765585373.165:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:53.165000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:53.187246 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 13 00:22:53.189000 audit[5272]: USER_START pid=5272 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.191000 audit[5276]: CRED_ACQ pid=5276 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.218406 kernel: audit: type=1105 audit(1765585373.189:850): pid=5272 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.218472 kernel: audit: type=1103 audit(1765585373.191:851): pid=5276 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.274929 sshd[5276]: Connection closed by 10.0.0.1 port 43976 Dec 13 00:22:53.275465 sshd-session[5272]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:53.276000 audit[5272]: USER_END pid=5272 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.276000 audit[5272]: CRED_DISP pid=5272 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.287474 kernel: audit: type=1106 audit(1765585373.276:852): pid=5272 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.287573 kernel: audit: type=1104 audit(1765585373.276:853): pid=5272 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.299747 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:43976.service: Deactivated successfully. Dec 13 00:22:53.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.65:22-10.0.0.1:43976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:53.301995 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 00:22:53.302939 systemd-logind[1605]: Session 21 logged out. Waiting for processes to exit. Dec 13 00:22:53.306251 systemd[1]: Started sshd@20-10.0.0.65:22-10.0.0.1:43980.service - OpenSSH per-connection server daemon (10.0.0.1:43980). 
Dec 13 00:22:53.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.65:22-10.0.0.1:43980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:53.307289 systemd-logind[1605]: Removed session 21. Dec 13 00:22:53.361000 audit[5290]: USER_ACCT pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.362203 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 43980 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:53.362000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.363000 audit[5290]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9064f7b0 a2=3 a3=0 items=0 ppid=1 pid=5290 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:53.363000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:53.365011 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:53.369686 systemd-logind[1605]: New session 22 of user core. Dec 13 00:22:53.379015 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 00:22:53.381000 audit[5290]: USER_START pid=5290 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:53.383000 audit[5294]: CRED_ACQ pid=5294 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:54.741046 sshd[5294]: Connection closed by 10.0.0.1 port 43980 Dec 13 00:22:54.742330 sshd-session[5290]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:54.743000 audit[5290]: USER_END pid=5290 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:54.743000 audit[5290]: CRED_DISP pid=5290 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:54.762720 systemd[1]: sshd@20-10.0.0.65:22-10.0.0.1:43980.service: Deactivated successfully. Dec 13 00:22:54.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.65:22-10.0.0.1:43980 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 00:22:54.765038 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 00:22:54.766218 systemd-logind[1605]: Session 22 logged out. Waiting for processes to exit. Dec 13 00:22:54.769480 systemd[1]: Started sshd@21-10.0.0.65:22-10.0.0.1:43984.service - OpenSSH per-connection server daemon (10.0.0.1:43984). Dec 13 00:22:54.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.65:22-10.0.0.1:43984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:54.770274 systemd-logind[1605]: Removed session 22. Dec 13 00:22:54.867000 audit[5305]: USER_ACCT pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:54.868535 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 43984 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:54.869000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:54.869000 audit[5305]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc85052980 a2=3 a3=0 items=0 ppid=1 pid=5305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:54.869000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:54.871168 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:54.876512 systemd-logind[1605]: New session 23 of user core. Dec 13 00:22:54.886032 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 13 00:22:54.887000 audit[5305]: USER_START pid=5305 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:54.889000 audit[5309]: CRED_ACQ pid=5309 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:55.616000 audit[5321]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5321 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:55.616000 audit[5321]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff02860a90 a2=0 a3=7fff02860a7c items=0 ppid=2927 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:55.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:55.623000 audit[5321]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5321 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:55.623000 audit[5321]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff02860a90 a2=0 a3=0 items=0 ppid=2927 pid=5321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:55.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:55.629775 sshd[5309]: Connection closed by 10.0.0.1 port 43984 Dec 13 00:22:55.630347 sshd-session[5305]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:55.631000 audit[5305]: USER_END pid=5305 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:55.631000 audit[5305]: CRED_DISP pid=5305 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:55.640993 systemd[1]: sshd@21-10.0.0.65:22-10.0.0.1:43984.service: Deactivated successfully. Dec 13 00:22:55.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.65:22-10.0.0.1:43984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:55.647641 systemd[1]: session-23.scope: Deactivated successfully. 
Dec 13 00:22:55.647000 audit[5325]: NETFILTER_CFG table=filter:139 family=2 entries=38 op=nft_register_rule pid=5325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:55.647000 audit[5325]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd7fead0f0 a2=0 a3=7ffd7fead0dc items=0 ppid=2927 pid=5325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:55.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:55.652857 systemd-logind[1605]: Session 23 logged out. Waiting for processes to exit. Dec 13 00:22:55.658618 systemd[1]: Started sshd@22-10.0.0.65:22-10.0.0.1:43988.service - OpenSSH per-connection server daemon (10.0.0.1:43988). Dec 13 00:22:55.657000 audit[5325]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:22:55.657000 audit[5325]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd7fead0f0 a2=0 a3=0 items=0 ppid=2927 pid=5325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:55.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:22:55.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.65:22-10.0.0.1:43988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:55.660628 systemd-logind[1605]: Removed session 23. Dec 13 00:22:55.719000 audit[5328]: USER_ACCT pid=5328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:55.720318 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 43988 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:55.722920 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:55.720000 audit[5328]: CRED_ACQ pid=5328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:55.720000 audit[5328]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd07156910 a2=3 a3=0 items=0 ppid=1 pid=5328 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:55.720000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:55.730372 systemd-logind[1605]: New session 24 of user core. Dec 13 00:22:55.741132 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 13 00:22:55.742000 audit[5328]: USER_START pid=5328 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:55.744000 audit[5332]: CRED_ACQ pid=5332 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:56.067623 sshd[5332]: Connection closed by 10.0.0.1 port 43988 Dec 13 00:22:56.068094 sshd-session[5328]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:56.069000 audit[5328]: USER_END pid=5328 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:56.069000 audit[5328]: CRED_DISP pid=5328 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:56.078598 systemd[1]: sshd@22-10.0.0.65:22-10.0.0.1:43988.service: Deactivated successfully. Dec 13 00:22:56.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.65:22-10.0.0.1:43988 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:56.081616 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 00:22:56.083625 systemd-logind[1605]: Session 24 logged out. Waiting for processes to exit. Dec 13 00:22:56.086684 systemd-logind[1605]: Removed session 24. Dec 13 00:22:56.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.65:22-10.0.0.1:44000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:56.088425 systemd[1]: Started sshd@23-10.0.0.65:22-10.0.0.1:44000.service - OpenSSH per-connection server daemon (10.0.0.1:44000). 
Dec 13 00:22:56.182000 audit[5346]: USER_ACCT pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:56.183662 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 44000 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:22:56.184000 audit[5346]: CRED_ACQ pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:56.184000 audit[5346]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffece4c02c0 a2=3 a3=0 items=0 ppid=1 pid=5346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:22:56.184000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:22:56.187260 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:22:56.193516 systemd-logind[1605]: New session 25 of user core. Dec 13 00:22:56.200255 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 00:22:56.203000 audit[5346]: USER_START pid=5346 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:56.205000 audit[5350]: CRED_ACQ pid=5350 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:56.293676 sshd[5350]: Connection closed by 10.0.0.1 port 44000 Dec 13 00:22:56.294054 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Dec 13 00:22:56.294000 audit[5346]: USER_END pid=5346 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:56.294000 audit[5346]: CRED_DISP pid=5346 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:22:56.299958 systemd[1]: sshd@23-10.0.0.65:22-10.0.0.1:44000.service: Deactivated successfully. Dec 13 00:22:56.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.65:22-10.0.0.1:44000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:22:56.302131 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 00:22:56.303166 systemd-logind[1605]: Session 25 logged out. Waiting for processes to exit. Dec 13 00:22:56.304377 systemd-logind[1605]: Removed session 25. 
Dec 13 00:22:58.575670 kubelet[2818]: E1213 00:22:58.575416 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:22:58.576196 kubelet[2818]: E1213 00:22:58.576125 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f54tn" podUID="7ce6ad04-f89c-40a1-981e-2b7e39fe58e0" Dec 13 00:22:58.576347 kubelet[2818]: E1213 00:22:58.576287 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" podUID="fce4aad9-52fa-4b91-82ff-c6436952148b" Dec 13 00:23:01.305783 systemd[1]: Started sshd@24-10.0.0.65:22-10.0.0.1:33174.service - OpenSSH per-connection server daemon (10.0.0.1:33174). Dec 13 00:23:01.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.65:22-10.0.0.1:33174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:01.312866 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 13 00:23:01.312998 kernel: audit: type=1130 audit(1765585381.305:895): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.65:22-10.0.0.1:33174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:01.364000 audit[5363]: USER_ACCT pid=5363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.365213 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 33174 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:23:01.367383 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:23:01.371551 systemd-logind[1605]: New session 26 of user core. 
Dec 13 00:23:01.365000 audit[5363]: CRED_ACQ pid=5363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.379155 kernel: audit: type=1101 audit(1765585381.364:896): pid=5363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.379212 kernel: audit: type=1103 audit(1765585381.365:897): pid=5363 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.379246 kernel: audit: type=1006 audit(1765585381.365:898): pid=5363 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 13 00:23:01.365000 audit[5363]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe54b62cb0 a2=3 a3=0 items=0 ppid=1 pid=5363 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:01.388472 kernel: audit: type=1300 audit(1765585381.365:898): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe54b62cb0 a2=3 a3=0 items=0 ppid=1 pid=5363 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:01.388508 kernel: audit: type=1327 audit(1765585381.365:898): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:01.365000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:01.396007 systemd[1]: Started session-26.scope - Session 26 of User core. 
Dec 13 00:23:01.397000 audit[5363]: USER_START pid=5363 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.399000 audit[5367]: CRED_ACQ pid=5367 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.410799 kernel: audit: type=1105 audit(1765585381.397:899): pid=5363 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.410900 kernel: audit: type=1103 audit(1765585381.399:900): pid=5367 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.464311 sshd[5367]: Connection closed by 10.0.0.1 port 33174 Dec 13 00:23:01.464583 sshd-session[5363]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:01.465000 audit[5363]: USER_END pid=5363 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.469145 systemd[1]: sshd@24-10.0.0.65:22-10.0.0.1:33174.service: Deactivated successfully. Dec 13 00:23:01.471537 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 00:23:01.472629 systemd-logind[1605]: Session 26 logged out. Waiting for processes to exit. Dec 13 00:23:01.474220 systemd-logind[1605]: Removed session 26. Dec 13 00:23:01.465000 audit[5363]: CRED_DISP pid=5363 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.501785 kernel: audit: type=1106 audit(1765585381.465:901): pid=5363 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.501845 kernel: audit: type=1104 audit(1765585381.465:902): pid=5363 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:01.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.65:22-10.0.0.1:33174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:23:01.572857 kubelet[2818]: E1213 00:23:01.572770 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" podUID="00a965a0-569e-4742-bf83-196c624e0f8f" Dec 13 00:23:01.573848 kubelet[2818]: E1213 00:23:01.573788 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" podUID="b431ca61-6062-45e4-a35d-3ec7ff6dccb1" Dec 13 00:23:01.574012 kubelet[2818]: E1213 00:23:01.573862 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:23:01.574408 kubelet[2818]: E1213 00:23:01.574380 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d686f8ffb-wbp4f" podUID="5353c832-e4bf-4b05-bc32-552262f10d42" Dec 13 00:23:03.428000 audit[5381]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=5381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:03.428000 audit[5381]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb7a43bb0 a2=0 a3=7ffdb7a43b9c items=0 ppid=2927 pid=5381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.428000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:03.437000 audit[5381]: NETFILTER_CFG table=nat:142 family=2 entries=104 op=nft_register_chain pid=5381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 00:23:03.437000 audit[5381]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffdb7a43bb0 a2=0 a3=7ffdb7a43b9c items=0 ppid=2927 pid=5381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:03.437000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 00:23:03.784368 kubelet[2818]: E1213 00:23:03.784220 2818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 00:23:06.478227 systemd[1]: Started sshd@25-10.0.0.65:22-10.0.0.1:33182.service - OpenSSH per-connection server daemon (10.0.0.1:33182). Dec 13 00:23:06.484035 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 00:23:06.484178 kernel: audit: type=1130 audit(1765585386.477:906): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.65:22-10.0.0.1:33182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:06.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.65:22-10.0.0.1:33182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:06.556000 audit[5411]: USER_ACCT pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.563026 kernel: audit: type=1101 audit(1765585386.556:907): pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.560011 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:23:06.563415 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 33182 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:23:06.557000 audit[5411]: CRED_ACQ pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.565776 systemd-logind[1605]: New session 27 of user core. 
Dec 13 00:23:06.568892 kernel: audit: type=1103 audit(1765585386.557:908): pid=5411 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.577876 kernel: audit: type=1006 audit(1765585386.557:909): pid=5411 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 13 00:23:06.577938 kernel: audit: type=1300 audit(1765585386.557:909): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc68c8d8d0 a2=3 a3=0 items=0 ppid=1 pid=5411 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:06.557000 audit[5411]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc68c8d8d0 a2=3 a3=0 items=0 ppid=1 pid=5411 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:06.580096 kernel: audit: type=1327 audit(1765585386.557:909): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:06.557000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:06.579160 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 00:23:06.582000 audit[5411]: USER_START pid=5411 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.584000 audit[5415]: CRED_ACQ pid=5415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.595105 kernel: audit: type=1105 audit(1765585386.582:910): pid=5411 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.595207 kernel: audit: type=1103 audit(1765585386.584:911): pid=5415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.666370 sshd[5415]: Connection closed by 10.0.0.1 port 33182 Dec 13 00:23:06.668266 sshd-session[5411]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:06.669000 audit[5411]: USER_END pid=5411 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.673246 systemd[1]: sshd@25-10.0.0.65:22-10.0.0.1:33182.service: Deactivated successfully. 
Dec 13 00:23:06.676140 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 00:23:06.677081 systemd-logind[1605]: Session 27 logged out. Waiting for processes to exit. Dec 13 00:23:06.678696 systemd-logind[1605]: Removed session 27. Dec 13 00:23:06.669000 audit[5411]: CRED_DISP pid=5411 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.694596 kernel: audit: type=1106 audit(1765585386.669:912): pid=5411 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.694705 kernel: audit: type=1104 audit(1765585386.669:913): pid=5411 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:06.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.65:22-10.0.0.1:33182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:11.684621 systemd[1]: Started sshd@26-10.0.0.65:22-10.0.0.1:58856.service - OpenSSH per-connection server daemon (10.0.0.1:58856). Dec 13 00:23:11.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.65:22-10.0.0.1:58856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:11.686059 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:23:11.686161 kernel: audit: type=1130 audit(1765585391.683:915): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.65:22-10.0.0.1:58856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:23:11.799000 audit[5429]: USER_ACCT pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.801016 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 58856 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:23:11.802000 audit[5429]: CRED_ACQ pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.806327 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:23:11.812167 kernel: audit: type=1101 audit(1765585391.799:916): pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.812734 kernel: audit: type=1103 audit(1765585391.802:917): pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.812765 kernel: audit: type=1006 audit(1765585391.802:918): pid=5429 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Dec 13 00:23:11.817836 kernel: audit: type=1300 audit(1765585391.802:918): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4cac46d0 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:11.802000 audit[5429]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4cac46d0 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:11.815887 systemd-logind[1605]: New session 28 of user core. Dec 13 00:23:11.802000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:11.824837 kernel: audit: type=1327 audit(1765585391.802:918): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:11.825153 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 13 00:23:11.828000 audit[5429]: USER_START pid=5429 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.836851 kernel: audit: type=1105 audit(1765585391.828:919): pid=5429 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.831000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.842850 kernel: audit: type=1103 audit(1765585391.831:920): pid=5434 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.929125 sshd[5434]: Connection closed by 10.0.0.1 port 58856 Dec 13 00:23:11.929469 sshd-session[5429]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:11.930000 audit[5429]: USER_END pid=5429 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.935314 systemd-logind[1605]: Session 28 logged out. Waiting for processes to exit. Dec 13 00:23:11.935689 systemd[1]: sshd@26-10.0.0.65:22-10.0.0.1:58856.service: Deactivated successfully. Dec 13 00:23:11.930000 audit[5429]: CRED_DISP pid=5429 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.938566 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 00:23:11.940436 systemd-logind[1605]: Removed session 28. Dec 13 00:23:11.943179 kernel: audit: type=1106 audit(1765585391.930:921): pid=5429 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.943246 kernel: audit: type=1104 audit(1765585391.930:922): pid=5429 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:11.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.65:22-10.0.0.1:58856 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 00:23:12.573901 kubelet[2818]: E1213 00:23:12.573826 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-lnz72" podUID="00a965a0-569e-4742-bf83-196c624e0f8f" Dec 13 00:23:12.575171 kubelet[2818]: E1213 00:23:12.575143 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-f54tn" podUID="7ce6ad04-f89c-40a1-981e-2b7e39fe58e0" Dec 13 00:23:13.574033 kubelet[2818]: E1213 00:23:13.573842 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cf9f886c6-9fch9" podUID="fce4aad9-52fa-4b91-82ff-c6436952148b" Dec 13 00:23:13.574033 kubelet[2818]: E1213 00:23:13.573846 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wvdrp" podUID="dedbe661-92c2-4c3f-9ab9-3f4df404e3b1" Dec 13 00:23:14.575543 kubelet[2818]: E1213 00:23:14.575469 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58486567b6-tgd79" podUID="b431ca61-6062-45e4-a35d-3ec7ff6dccb1" Dec 13 00:23:15.573218 kubelet[2818]: E1213 00:23:15.573024 2818 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d686f8ffb-wbp4f" podUID="5353c832-e4bf-4b05-bc32-552262f10d42" Dec 13 00:23:16.946138 systemd[1]: Started sshd@27-10.0.0.65:22-10.0.0.1:58872.service - OpenSSH per-connection server daemon (10.0.0.1:58872). Dec 13 00:23:16.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.65:22-10.0.0.1:58872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:16.947635 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 00:23:16.947685 kernel: audit: type=1130 audit(1765585396.945:924): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.65:22-10.0.0.1:58872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 00:23:17.007000 audit[5447]: USER_ACCT pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.008647 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 58872 ssh2: RSA SHA256:bCAENV3gEImip2hLsDgpmZxJX+wB3hyqf9WeGkoaK2w Dec 13 00:23:17.011015 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 00:23:17.008000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.016708 systemd-logind[1605]: New session 29 of user core. 
Dec 13 00:23:17.021372 kernel: audit: type=1101 audit(1765585397.007:925): pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.021506 kernel: audit: type=1103 audit(1765585397.008:926): pid=5447 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.021536 kernel: audit: type=1006 audit(1765585397.008:927): pid=5447 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Dec 13 00:23:17.008000 audit[5447]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9f094960 a2=3 a3=0 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:17.032111 kernel: audit: type=1300 audit(1765585397.008:927): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9f094960 a2=3 a3=0 items=0 ppid=1 pid=5447 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 00:23:17.032180 kernel: audit: type=1327 audit(1765585397.008:927): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:17.008000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 13 00:23:17.036119 systemd[1]: Started session-29.scope - Session 29 of User core. 
Dec 13 00:23:17.039000 audit[5447]: USER_START pid=5447 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.041000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.053688 kernel: audit: type=1105 audit(1765585397.039:928): pid=5447 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.053802 kernel: audit: type=1103 audit(1765585397.041:929): pid=5451 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.124495 sshd[5451]: Connection closed by 10.0.0.1 port 58872 Dec 13 00:23:17.124895 sshd-session[5447]: pam_unix(sshd:session): session closed for user core Dec 13 00:23:17.125000 audit[5447]: USER_END pid=5447 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.129588 systemd[1]: sshd@27-10.0.0.65:22-10.0.0.1:58872.service: Deactivated successfully. Dec 13 00:23:17.133000 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 00:23:17.125000 audit[5447]: CRED_DISP pid=5447 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.135417 systemd-logind[1605]: Session 29 logged out. Waiting for processes to exit. Dec 13 00:23:17.136984 systemd-logind[1605]: Removed session 29. Dec 13 00:23:17.138484 kernel: audit: type=1106 audit(1765585397.125:930): pid=5447 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.138543 kernel: audit: type=1104 audit(1765585397.125:931): pid=5447 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 00:23:17.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.65:22-10.0.0.1:58872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'