Dec 16 03:13:00.695806 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:18:19 -00 2025
Dec 16 03:13:00.695837 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:13:00.695849 kernel: BIOS-provided physical RAM map:
Dec 16 03:13:00.695858 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Dec 16 03:13:00.695867 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Dec 16 03:13:00.695879 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Dec 16 03:13:00.695890 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Dec 16 03:13:00.695899 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Dec 16 03:13:00.695909 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Dec 16 03:13:00.695918 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Dec 16 03:13:00.695928 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Dec 16 03:13:00.695937 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Dec 16 03:13:00.695946 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Dec 16 03:13:00.695956 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Dec 16 03:13:00.695970 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Dec 16 03:13:00.695980 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Dec 16 03:13:00.695995 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 03:13:00.696005 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 03:13:00.696018 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 03:13:00.696028 kernel: NX (Execute Disable) protection: active
Dec 16 03:13:00.696038 kernel: APIC: Static calls initialized
Dec 16 03:13:00.696048 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Dec 16 03:13:00.696059 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Dec 16 03:13:00.696072 kernel: extended physical RAM map:
Dec 16 03:13:00.696085 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Dec 16 03:13:00.696097 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Dec 16 03:13:00.696121 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Dec 16 03:13:00.696133 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Dec 16 03:13:00.696146 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Dec 16 03:13:00.696185 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Dec 16 03:13:00.696198 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Dec 16 03:13:00.696209 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Dec 16 03:13:00.696219 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Dec 16 03:13:00.696229 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Dec 16 03:13:00.696239 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Dec 16 03:13:00.696249 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Dec 16 03:13:00.696259 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Dec 16 03:13:00.696269 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Dec 16 03:13:00.696279 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Dec 16 03:13:00.696292 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Dec 16 03:13:00.696307 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Dec 16 03:13:00.696318 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 16 03:13:00.696328 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 16 03:13:00.696339 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 16 03:13:00.696351 kernel: efi: EFI v2.7 by EDK II
Dec 16 03:13:00.696362 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Dec 16 03:13:00.696373 kernel: random: crng init done
Dec 16 03:13:00.696398 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 16 03:13:00.696409 kernel: secureboot: Secure boot enabled
Dec 16 03:13:00.696419 kernel: SMBIOS 2.8 present.
Dec 16 03:13:00.696430 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 16 03:13:00.696441 kernel: DMI: Memory slots populated: 1/1
Dec 16 03:13:00.696451 kernel: Hypervisor detected: KVM
Dec 16 03:13:00.696465 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Dec 16 03:13:00.696475 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 16 03:13:00.696486 kernel: kvm-clock: using sched offset of 6142267231 cycles
Dec 16 03:13:00.696496 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 03:13:00.696508 kernel: tsc: Detected 2794.748 MHz processor
Dec 16 03:13:00.696520 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 16 03:13:00.696531 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 16 03:13:00.696542 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Dec 16 03:13:00.696557 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 16 03:13:00.696571 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 16 03:13:00.696585 kernel: Using GB pages for direct mapping
Dec 16 03:13:00.696596 kernel: ACPI: Early table checksum verification disabled
Dec 16 03:13:00.696607 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Dec 16 03:13:00.696618 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 16 03:13:00.696629 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:13:00.696640 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:13:00.696654 kernel: ACPI: FACS 0x000000009BBDD000 000040
Dec 16 03:13:00.696665 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:13:00.696676 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:13:00.696687 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:13:00.696698 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 16 03:13:00.696708 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 16 03:13:00.696719 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Dec 16 03:13:00.696733 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Dec 16 03:13:00.696744 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Dec 16 03:13:00.696755 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Dec 16 03:13:00.696766 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Dec 16 03:13:00.696777 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Dec 16 03:13:00.696787 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Dec 16 03:13:00.696798 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Dec 16 03:13:00.696811 kernel: No NUMA configuration found
Dec 16 03:13:00.696822 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Dec 16 03:13:00.696833 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Dec 16 03:13:00.696844 kernel: Zone ranges:
Dec 16 03:13:00.696855 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 16 03:13:00.696866 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Dec 16 03:13:00.696877 kernel: Normal empty
Dec 16 03:13:00.696888 kernel: Device empty
Dec 16 03:13:00.696901 kernel: Movable zone start for each node
Dec 16 03:13:00.696912 kernel: Early memory node ranges
Dec 16 03:13:00.696923 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Dec 16 03:13:00.696934 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Dec 16 03:13:00.696945 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Dec 16 03:13:00.696956 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Dec 16 03:13:00.696967 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Dec 16 03:13:00.696980 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Dec 16 03:13:00.696991 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 16 03:13:00.697002 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Dec 16 03:13:00.697013 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 16 03:13:00.697024 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 16 03:13:00.697035 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 16 03:13:00.697046 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Dec 16 03:13:00.697060 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 16 03:13:00.697071 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 16 03:13:00.697082 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 16 03:13:00.697093 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 16 03:13:00.697118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 16 03:13:00.697131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 16 03:13:00.697142 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 16 03:13:00.697156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 16 03:13:00.697212 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 16 03:13:00.697223 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 16 03:13:00.697234 kernel: TSC deadline timer available
Dec 16 03:13:00.697245 kernel: CPU topo: Max. logical packages: 1
Dec 16 03:13:00.697256 kernel: CPU topo: Max. logical dies: 1
Dec 16 03:13:00.697277 kernel: CPU topo: Max. dies per package: 1
Dec 16 03:13:00.697289 kernel: CPU topo: Max. threads per core: 1
Dec 16 03:13:00.697300 kernel: CPU topo: Num. cores per package: 4
Dec 16 03:13:00.697312 kernel: CPU topo: Num. threads per package: 4
Dec 16 03:13:00.697326 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Dec 16 03:13:00.697337 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 16 03:13:00.697348 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 16 03:13:00.697360 kernel: kvm-guest: setup PV sched yield
Dec 16 03:13:00.697373 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 16 03:13:00.697385 kernel: Booting paravirtualized kernel on KVM
Dec 16 03:13:00.697397 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 16 03:13:00.697409 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 16 03:13:00.697420 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Dec 16 03:13:00.697432 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Dec 16 03:13:00.697443 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 16 03:13:00.697457 kernel: kvm-guest: PV spinlocks enabled
Dec 16 03:13:00.697468 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 16 03:13:00.697481 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357
Dec 16 03:13:00.697493 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 03:13:00.697504 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 03:13:00.697516 kernel: Fallback order for Node 0: 0
Dec 16 03:13:00.697527 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Dec 16 03:13:00.697542 kernel: Policy zone: DMA32
Dec 16 03:13:00.697553 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 03:13:00.697564 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 16 03:13:00.697576 kernel: ftrace: allocating 40103 entries in 157 pages
Dec 16 03:13:00.697587 kernel: ftrace: allocated 157 pages with 5 groups
Dec 16 03:13:00.697599 kernel: Dynamic Preempt: voluntary
Dec 16 03:13:00.697610 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 03:13:00.697625 kernel: rcu: RCU event tracing is enabled.
Dec 16 03:13:00.697637 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 16 03:13:00.697650 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 03:13:00.697661 kernel: Rude variant of Tasks RCU enabled.
Dec 16 03:13:00.697673 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 03:13:00.697684 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 03:13:00.697695 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 16 03:13:00.697709 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 03:13:00.697721 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 03:13:00.697737 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 16 03:13:00.697749 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 16 03:13:00.697760 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 03:13:00.697772 kernel: Console: colour dummy device 80x25
Dec 16 03:13:00.697783 kernel: printk: legacy console [ttyS0] enabled
Dec 16 03:13:00.697797 kernel: ACPI: Core revision 20240827
Dec 16 03:13:00.697809 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 16 03:13:00.697821 kernel: APIC: Switch to symmetric I/O mode setup
Dec 16 03:13:00.697832 kernel: x2apic enabled
Dec 16 03:13:00.697843 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 16 03:13:00.697855 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 16 03:13:00.697867 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 16 03:13:00.697881 kernel: kvm-guest: setup PV IPIs
Dec 16 03:13:00.697892 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 16 03:13:00.697904 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 16 03:13:00.697915 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 16 03:13:00.697927 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 16 03:13:00.697938 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 16 03:13:00.697950 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 16 03:13:00.697964 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 16 03:13:00.697976 kernel: Spectre V2 : Mitigation: Retpolines
Dec 16 03:13:00.697987 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 16 03:13:00.697999 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 16 03:13:00.698010 kernel: active return thunk: retbleed_return_thunk
Dec 16 03:13:00.698022 kernel: RETBleed: Mitigation: untrained return thunk
Dec 16 03:13:00.698034 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 16 03:13:00.698048 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 16 03:13:00.698059 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 16 03:13:00.698072 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 16 03:13:00.698084 kernel: active return thunk: srso_return_thunk
Dec 16 03:13:00.698095 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 16 03:13:00.698116 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 16 03:13:00.698128 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 16 03:13:00.698143 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 16 03:13:00.698154 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 16 03:13:00.698184 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 16 03:13:00.698199 kernel: Freeing SMP alternatives memory: 32K
Dec 16 03:13:00.698213 kernel: pid_max: default: 32768 minimum: 301
Dec 16 03:13:00.698228 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 03:13:00.698242 kernel: landlock: Up and running.
Dec 16 03:13:00.698260 kernel: SELinux: Initializing.
Dec 16 03:13:00.698275 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 03:13:00.698289 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 03:13:00.698304 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 16 03:13:00.698319 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 16 03:13:00.698333 kernel: ... version: 0
Dec 16 03:13:00.698352 kernel: ... bit width: 48
Dec 16 03:13:00.698367 kernel: ... generic registers: 6
Dec 16 03:13:00.698379 kernel: ... value mask: 0000ffffffffffff
Dec 16 03:13:00.698390 kernel: ... max period: 00007fffffffffff
Dec 16 03:13:00.698402 kernel: ... fixed-purpose events: 0
Dec 16 03:13:00.698413 kernel: ... event mask: 000000000000003f
Dec 16 03:13:00.698425 kernel: signal: max sigframe size: 1776
Dec 16 03:13:00.698436 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 03:13:00.698448 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 03:13:00.698463 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 03:13:00.698486 kernel: smp: Bringing up secondary CPUs ...
Dec 16 03:13:00.698498 kernel: smpboot: x86: Booting SMP configuration:
Dec 16 03:13:00.698527 kernel: .... node #0, CPUs: #1 #2 #3
Dec 16 03:13:00.698538 kernel: smp: Brought up 1 node, 4 CPUs
Dec 16 03:13:00.698550 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 16 03:13:00.698572 kernel: Memory: 2425600K/2552216K available (14336K kernel code, 2444K rwdata, 31636K rodata, 15556K init, 2484K bss, 120680K reserved, 0K cma-reserved)
Dec 16 03:13:00.698596 kernel: devtmpfs: initialized
Dec 16 03:13:00.698617 kernel: x86/mm: Memory block size: 128MB
Dec 16 03:13:00.698638 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Dec 16 03:13:00.698659 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Dec 16 03:13:00.698680 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 03:13:00.698716 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 16 03:13:00.698728 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 03:13:00.698743 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 03:13:00.698754 kernel: audit: initializing netlink subsys (disabled)
Dec 16 03:13:00.698766 kernel: audit: type=2000 audit(1765854776.381:1): state=initialized audit_enabled=0 res=1
Dec 16 03:13:00.698778 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 03:13:00.698789 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 16 03:13:00.698801 kernel: cpuidle: using governor menu
Dec 16 03:13:00.698813 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 03:13:00.698827 kernel: dca service started, version 1.12.1
Dec 16 03:13:00.698839 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Dec 16 03:13:00.698851 kernel: PCI: Using configuration type 1 for base access
Dec 16 03:13:00.698862 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 16 03:13:00.698874 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 03:13:00.698886 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 03:13:00.698897 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 03:13:00.698912 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 03:13:00.698923 kernel: ACPI: Added _OSI(Module Device)
Dec 16 03:13:00.698935 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 03:13:00.698947 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 03:13:00.698958 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 03:13:00.698970 kernel: ACPI: Interpreter enabled
Dec 16 03:13:00.698981 kernel: ACPI: PM: (supports S0 S5)
Dec 16 03:13:00.698995 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 16 03:13:00.699007 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 16 03:13:00.699019 kernel: PCI: Using E820 reservations for host bridge windows
Dec 16 03:13:00.699030 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 16 03:13:00.699042 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 16 03:13:00.699393 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 16 03:13:00.699605 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 16 03:13:00.699808 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 16 03:13:00.699823 kernel: PCI host bridge to bus 0000:00
Dec 16 03:13:00.700022 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 16 03:13:00.700255 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 16 03:13:00.700461 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 16 03:13:00.700649 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 16 03:13:00.700828 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 16 03:13:00.701008 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 16 03:13:00.701242 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 16 03:13:00.701478 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Dec 16 03:13:00.701700 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Dec 16 03:13:00.701904 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Dec 16 03:13:00.702097 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Dec 16 03:13:00.702323 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Dec 16 03:13:00.702518 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 16 03:13:00.703022 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 16 03:13:00.703530 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Dec 16 03:13:00.703780 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Dec 16 03:13:00.703978 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 16 03:13:00.704255 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 16 03:13:00.704456 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Dec 16 03:13:00.704658 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Dec 16 03:13:00.704863 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 16 03:13:00.705077 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 16 03:13:00.705342 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Dec 16 03:13:00.705545 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Dec 16 03:13:00.705744 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 16 03:13:00.705945 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Dec 16 03:13:00.706233 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Dec 16 03:13:00.706434 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 16 03:13:00.706663 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Dec 16 03:13:00.706864 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Dec 16 03:13:00.707061 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Dec 16 03:13:00.707341 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Dec 16 03:13:00.707548 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Dec 16 03:13:00.707564 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 16 03:13:00.707576 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 16 03:13:00.707588 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 16 03:13:00.707600 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 16 03:13:00.707616 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 16 03:13:00.707629 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 16 03:13:00.707640 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 16 03:13:00.707652 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 16 03:13:00.707664 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 16 03:13:00.707675 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 16 03:13:00.707687 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 16 03:13:00.707702 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 16 03:13:00.707714 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 16 03:13:00.707725 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 16 03:13:00.707738 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 16 03:13:00.707749 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 16 03:13:00.707761 kernel: iommu: Default domain type: Translated
Dec 16 03:13:00.707773 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 16 03:13:00.707787 kernel: efivars: Registered efivars operations
Dec 16 03:13:00.707799 kernel: PCI: Using ACPI for IRQ routing
Dec 16 03:13:00.707811 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 16 03:13:00.707823 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Dec 16 03:13:00.707835 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Dec 16 03:13:00.707847 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Dec 16 03:13:00.707859 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Dec 16 03:13:00.707871 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Dec 16 03:13:00.708071 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 16 03:13:00.708323 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 16 03:13:00.708518 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 16 03:13:00.708533 kernel: vgaarb: loaded
Dec 16 03:13:00.708546 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 16 03:13:00.708558 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 16 03:13:00.708574 kernel: clocksource: Switched to clocksource kvm-clock
Dec 16 03:13:00.708586 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 03:13:00.708598 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 03:13:00.708609 kernel: pnp: PnP ACPI init
Dec 16 03:13:00.708834 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 16 03:13:00.708852 kernel: pnp: PnP ACPI: found 6 devices
Dec 16 03:13:00.708864 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 16 03:13:00.708879 kernel: NET: Registered PF_INET protocol family
Dec 16 03:13:00.708891 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 03:13:00.708903 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 03:13:00.708915 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 03:13:00.708927 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 03:13:00.708939 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 03:13:00.708951 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 03:13:00.708966 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 03:13:00.708978 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 03:13:00.708990 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 03:13:00.709001 kernel: NET: Registered PF_XDP protocol family
Dec 16 03:13:00.709224 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Dec 16 03:13:00.709428 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Dec 16 03:13:00.709617 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 16 03:13:00.709862 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 16 03:13:00.710044 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 16 03:13:00.710314 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 16 03:13:00.710508 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 16 03:13:00.710735 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 16 03:13:00.710751 kernel: PCI: CLS 0 bytes, default 64
Dec 16 03:13:00.710769 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Dec 16 03:13:00.710781 kernel: Initialise system trusted keyrings
Dec 16 03:13:00.710793 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 03:13:00.710805 kernel: Key type asymmetric registered
Dec 16 03:13:00.710817 kernel: Asymmetric key parser 'x509' registered
Dec 16 03:13:00.710846 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 16 03:13:00.710861 kernel: io scheduler mq-deadline registered
Dec 16 03:13:00.710877 kernel: io scheduler kyber registered
Dec 16 03:13:00.710889 kernel: io scheduler bfq registered
Dec 16 03:13:00.710902 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 16 03:13:00.710915 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 16 03:13:00.710928 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 16 03:13:00.710940 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 16 03:13:00.710952 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 03:13:00.710968 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 16 03:13:00.710980 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 16 03:13:00.710993 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 16 03:13:00.711005 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 16 03:13:00.711247 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 16 03:13:00.711265 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 16 03:13:00.711451 kernel: rtc_cmos 00:04: registered as rtc0
Dec 16 03:13:00.711647 kernel: rtc_cmos 00:04: setting system clock to 2025-12-16T03:12:58 UTC (1765854778)
Dec 16 03:13:00.711840 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 16 03:13:00.711857 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 16 03:13:00.711869 kernel: efifb: probing for efifb
Dec 16 03:13:00.711881 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 16 03:13:00.711894 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 16 03:13:00.711910 kernel: efifb: scrolling: redraw
Dec 16 03:13:00.711923 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 03:13:00.711935 kernel: Console: switching to colour frame buffer device 160x50
Dec 16 03:13:00.711950 kernel: fb0: EFI VGA frame buffer device
Dec 16 03:13:00.711962 kernel: pstore: Using crash dump compression: deflate
Dec 16 03:13:00.711977 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 16 03:13:00.711989 kernel: NET: Registered PF_INET6 protocol family
Dec 16 03:13:00.712001 kernel: Segment Routing with IPv6
Dec 16 03:13:00.712013 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 03:13:00.712025 kernel: NET: Registered PF_PACKET protocol family
Dec 16 03:13:00.712038 kernel: Key type dns_resolver registered
Dec 16 03:13:00.712050 kernel: IPI shorthand broadcast: enabled
Dec 16 03:13:00.712065 kernel: sched_clock: Marking stable (3043014286, 347679135)->(3521539193, -130845772)
Dec 16 03:13:00.712077 kernel: registered taskstats version 1
Dec 16 03:13:00.712089 kernel: Loading compiled-in X.509 certificates
Dec 16 03:13:00.712115 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: aafd1eb27ea805b8231c3bede9210239fae84df8'
Dec 16 03:13:00.712128 kernel: Demotion targets for Node 0: null
Dec 16 03:13:00.712140 kernel: Key type .fscrypt registered
Dec 16 03:13:00.712152 kernel: Key type fscrypt-provisioning registered
Dec 16 03:13:00.712197 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 03:13:00.712234 kernel: ima: Allocated hash algorithm: sha1 Dec 16 03:13:00.712250 kernel: ima: No architecture policies found Dec 16 03:13:00.712265 kernel: clk: Disabling unused clocks Dec 16 03:13:00.712281 kernel: Freeing unused kernel image (initmem) memory: 15556K Dec 16 03:13:00.712296 kernel: Write protecting the kernel read-only data: 47104k Dec 16 03:13:00.712312 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Dec 16 03:13:00.712331 kernel: Run /init as init process Dec 16 03:13:00.712346 kernel: with arguments: Dec 16 03:13:00.712362 kernel: /init Dec 16 03:13:00.712378 kernel: with environment: Dec 16 03:13:00.712390 kernel: HOME=/ Dec 16 03:13:00.712402 kernel: TERM=linux Dec 16 03:13:00.712415 kernel: SCSI subsystem initialized Dec 16 03:13:00.712430 kernel: libata version 3.00 loaded. Dec 16 03:13:00.712635 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 03:13:00.712653 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 03:13:00.712852 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 03:13:00.713048 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 03:13:00.713299 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 03:13:00.713558 kernel: scsi host0: ahci Dec 16 03:13:00.713781 kernel: scsi host1: ahci Dec 16 03:13:00.713993 kernel: scsi host2: ahci Dec 16 03:13:00.714254 kernel: scsi host3: ahci Dec 16 03:13:00.714495 kernel: scsi host4: ahci Dec 16 03:13:00.714708 kernel: scsi host5: ahci Dec 16 03:13:00.714730 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Dec 16 03:13:00.714743 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Dec 16 03:13:00.714755 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Dec 16 03:13:00.714768 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Dec 16 03:13:00.714780 kernel: 
ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Dec 16 03:13:00.714793 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Dec 16 03:13:00.714808 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 03:13:00.714821 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 03:13:00.714833 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 03:13:00.714845 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 03:13:00.714858 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 03:13:00.714871 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 03:13:00.714883 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 03:13:00.714895 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 03:13:00.714910 kernel: ata3.00: applying bridge limits Dec 16 03:13:00.714923 kernel: ata3.00: LPM support broken, forcing max_power Dec 16 03:13:00.714934 kernel: ata3.00: configured for UDMA/100 Dec 16 03:13:00.715224 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 03:13:00.715442 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 16 03:13:00.715643 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Dec 16 03:13:00.715664 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 03:13:00.715677 kernel: GPT:16515071 != 27000831 Dec 16 03:13:00.715689 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 03:13:00.715701 kernel: GPT:16515071 != 27000831 Dec 16 03:13:00.715713 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 16 03:13:00.715725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 03:13:00.715942 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 03:13:00.715962 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 03:13:00.716212 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 16 03:13:00.716229 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 03:13:00.716242 kernel: device-mapper: uevent: version 1.0.3 Dec 16 03:13:00.716254 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 03:13:00.716267 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Dec 16 03:13:00.716284 kernel: raid6: avx2x4 gen() 28693 MB/s Dec 16 03:13:00.716296 kernel: raid6: avx2x2 gen() 29780 MB/s Dec 16 03:13:00.716308 kernel: raid6: avx2x1 gen() 24607 MB/s Dec 16 03:13:00.716320 kernel: raid6: using algorithm avx2x2 gen() 29780 MB/s Dec 16 03:13:00.716332 kernel: raid6: .... 
xor() 19057 MB/s, rmw enabled Dec 16 03:13:00.716345 kernel: raid6: using avx2x2 recovery algorithm Dec 16 03:13:00.716357 kernel: xor: automatically using best checksumming function avx Dec 16 03:13:00.716369 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 03:13:00.716385 kernel: BTRFS: device fsid 57a8262f-2900-48ba-a17e-aafbd70d59c7 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (181) Dec 16 03:13:00.716397 kernel: BTRFS info (device dm-0): first mount of filesystem 57a8262f-2900-48ba-a17e-aafbd70d59c7 Dec 16 03:13:00.716410 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:13:00.716422 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 03:13:00.716435 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 03:13:00.716447 kernel: loop: module loaded Dec 16 03:13:00.716459 kernel: loop0: detected capacity change from 0 to 100528 Dec 16 03:13:00.716475 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 03:13:00.716489 systemd[1]: Successfully made /usr/ read-only. Dec 16 03:13:00.716505 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:13:00.716518 systemd[1]: Detected virtualization kvm. Dec 16 03:13:00.716531 systemd[1]: Detected architecture x86-64. Dec 16 03:13:00.716547 systemd[1]: Running in initrd. Dec 16 03:13:00.716559 systemd[1]: No hostname configured, using default hostname. Dec 16 03:13:00.716572 systemd[1]: Hostname set to . Dec 16 03:13:00.716585 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:13:00.716598 systemd[1]: Queued start job for default target initrd.target. 
Dec 16 03:13:00.716611 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 16 03:13:00.716623 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:13:00.716639 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:13:00.716654 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 03:13:00.716667 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:13:00.716681 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 03:13:00.716694 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 03:13:00.716710 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:13:00.716723 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:13:00.716735 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 03:13:00.716748 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:13:00.716761 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:13:00.716774 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:13:00.716786 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:13:00.716802 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 03:13:00.716815 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 03:13:00.716828 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:13:00.716841 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 03:13:00.716854 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Dec 16 03:13:00.716867 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:13:00.716879 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:13:00.716895 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:13:00.716908 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:13:00.716921 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 03:13:00.716934 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 03:13:00.716948 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:13:00.716961 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 03:13:00.716974 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 03:13:00.716989 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 03:13:00.717002 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:13:00.717015 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:13:00.717029 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:13:00.717044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:13:00.717057 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 03:13:00.717070 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 03:13:00.717084 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:13:00.717140 systemd-journald[315]: Collecting audit messages is enabled. 
Dec 16 03:13:00.717192 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 03:13:00.717212 kernel: Bridge firewalling registered Dec 16 03:13:00.717228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:13:00.717245 kernel: audit: type=1130 audit(1765854780.713:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.717265 systemd-journald[315]: Journal started Dec 16 03:13:00.717297 systemd-journald[315]: Runtime Journal (/run/log/journal/a7dcbdddb21d4324a49972a831bf252d) is 5.9M, max 47.8M, 41.8M free. Dec 16 03:13:00.717369 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:13:00.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.710753 systemd-modules-load[318]: Inserted module 'br_netfilter' Dec 16 03:13:00.725621 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 03:13:00.725651 kernel: audit: type=1130 audit(1765854780.724:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.731636 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Dec 16 03:13:00.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.738203 kernel: audit: type=1130 audit(1765854780.732:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.748424 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:13:00.753444 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:13:00.758707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:13:00.767527 kernel: audit: type=1130 audit(1765854780.760:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.767719 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:13:00.775775 kernel: audit: type=1130 audit(1765854780.768:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.775931 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 16 03:13:00.776977 systemd-tmpfiles[338]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 03:13:00.787865 kernel: audit: type=1130 audit(1765854780.780:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.788382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 03:13:00.793000 audit: BPF prog-id=6 op=LOAD Dec 16 03:13:00.794806 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:13:00.799514 kernel: audit: type=1334 audit(1765854780.793:8): prog-id=6 op=LOAD Dec 16 03:13:00.799478 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:13:00.808069 kernel: audit: type=1130 audit(1765854780.800:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.830965 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:13:00.839762 kernel: audit: type=1130 audit(1765854780.831:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:13:00.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.841483 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 03:13:00.870802 systemd-resolved[346]: Positive Trust Anchors: Dec 16 03:13:00.870824 systemd-resolved[346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:13:00.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:00.870830 systemd-resolved[346]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:13:00.870870 systemd-resolved[346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:13:00.897876 systemd-resolved[346]: Defaulting to hostname 'linux'. Dec 16 03:13:00.899341 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:13:00.915931 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Dec 16 03:13:00.942658 dracut-cmdline[360]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=553464fdb0286a5b06b399da29ca659e521c68f08ea70a931c96ddffd00b4357 Dec 16 03:13:01.053196 kernel: Loading iSCSI transport class v2.0-870. Dec 16 03:13:01.067197 kernel: iscsi: registered transport (tcp) Dec 16 03:13:01.257372 kernel: iscsi: registered transport (qla4xxx) Dec 16 03:13:01.257457 kernel: QLogic iSCSI HBA Driver Dec 16 03:13:01.286260 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:13:01.345199 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:13:01.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.347705 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:13:01.354891 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:13:01.354919 kernel: audit: type=1130 audit(1765854781.345:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.436964 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 03:13:01.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:13:01.439537 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 03:13:01.448707 kernel: audit: type=1130 audit(1765854781.437:13): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.449063 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 03:13:01.501062 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:13:01.512499 kernel: audit: type=1130 audit(1765854781.501:14): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.512532 kernel: audit: type=1334 audit(1765854781.504:15): prog-id=7 op=LOAD Dec 16 03:13:01.512545 kernel: audit: type=1334 audit(1765854781.504:16): prog-id=8 op=LOAD Dec 16 03:13:01.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.504000 audit: BPF prog-id=7 op=LOAD Dec 16 03:13:01.504000 audit: BPF prog-id=8 op=LOAD Dec 16 03:13:01.505293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:13:01.543549 systemd-udevd[599]: Using default interface naming scheme 'v257'. Dec 16 03:13:01.560358 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:13:01.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.569739 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Dec 16 03:13:01.571594 kernel: audit: type=1130 audit(1765854781.561:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.614466 dracut-pre-trigger[668]: rd.md=0: removing MD RAID activation Dec 16 03:13:01.619570 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:13:01.631264 kernel: audit: type=1130 audit(1765854781.620:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.631301 kernel: audit: type=1334 audit(1765854781.627:19): prog-id=9 op=LOAD Dec 16 03:13:01.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.627000 audit: BPF prog-id=9 op=LOAD Dec 16 03:13:01.629831 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:13:01.684918 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 03:13:01.692363 kernel: audit: type=1130 audit(1765854781.686:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.691365 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Dec 16 03:13:01.709644 systemd-networkd[709]: lo: Link UP Dec 16 03:13:01.709660 systemd-networkd[709]: lo: Gained carrier Dec 16 03:13:01.710615 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:13:01.721664 kernel: audit: type=1130 audit(1765854781.711:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.711267 systemd[1]: Reached target network.target - Network. Dec 16 03:13:01.821840 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 03:13:01.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:01.827349 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 03:13:01.892851 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 03:13:01.914999 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 03:13:01.938220 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 03:13:01.956613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:13:01.963658 kernel: AES CTR mode by8 optimization enabled Dec 16 03:13:01.972733 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 03:13:01.985557 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Dec 16 03:13:01.987801 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:13:01.987807 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:13:02.030559 systemd-networkd[709]: eth0: Link UP Dec 16 03:13:02.030908 systemd-networkd[709]: eth0: Gained carrier Dec 16 03:13:02.030927 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:13:02.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:02.041914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:13:02.042347 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:13:02.045844 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:13:02.057267 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 03:13:02.066213 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 03:13:02.059564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:13:02.098233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:13:02.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:02.199656 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Dec 16 03:13:02.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:02.202764 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 03:13:02.203265 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:13:02.216254 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 03:13:02.225711 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 03:13:02.256155 disk-uuid[834]: Primary Header is updated. Dec 16 03:13:02.256155 disk-uuid[834]: Secondary Entries is updated. Dec 16 03:13:02.256155 disk-uuid[834]: Secondary Header is updated. Dec 16 03:13:02.289326 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 03:13:02.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:03.208485 systemd-networkd[709]: eth0: Gained IPv6LL Dec 16 03:13:03.323877 disk-uuid[845]: Warning: The kernel is still using the old partition table. Dec 16 03:13:03.323877 disk-uuid[845]: The new table will be used at the next reboot or after you Dec 16 03:13:03.323877 disk-uuid[845]: run partprobe(8) or kpartx(8) Dec 16 03:13:03.323877 disk-uuid[845]: The operation has completed successfully. Dec 16 03:13:03.348079 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 03:13:03.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:13:03.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:03.348315 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 03:13:03.351891 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 03:13:03.395199 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (862) Dec 16 03:13:03.395259 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:13:03.399271 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 03:13:03.404572 kernel: BTRFS info (device vda6): turning on async discard Dec 16 03:13:03.404603 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 03:13:03.414216 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38 Dec 16 03:13:03.417220 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 03:13:03.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:03.422961 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 03:13:03.703956 ignition[881]: Ignition 2.24.0
Dec 16 03:13:03.703981 ignition[881]: Stage: fetch-offline
Dec 16 03:13:03.704074 ignition[881]: no configs at "/usr/lib/ignition/base.d"
Dec 16 03:13:03.704091 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:13:03.705941 ignition[881]: parsed url from cmdline: ""
Dec 16 03:13:03.705946 ignition[881]: no config URL provided
Dec 16 03:13:03.706117 ignition[881]: reading system config file "/usr/lib/ignition/user.ign"
Dec 16 03:13:03.706142 ignition[881]: no config at "/usr/lib/ignition/user.ign"
Dec 16 03:13:03.706233 ignition[881]: op(1): [started] loading QEMU firmware config module
Dec 16 03:13:03.706240 ignition[881]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 16 03:13:03.721801 ignition[881]: op(1): [finished] loading QEMU firmware config module
Dec 16 03:13:03.721872 ignition[881]: QEMU firmware config was not found. Ignoring...
Dec 16 03:13:03.812980 ignition[881]: parsing config with SHA512: 8151371dba982c68e2ecaed3cc166c56952f53469cacc4e4e3d1a58bd5eaecbe796dde7fffa1f2b4ce271d1d90c6fc1fb05ae88c03f172e5a7383ddacee6548a
Dec 16 03:13:03.818786 unknown[881]: fetched base config from "system"
Dec 16 03:13:03.819288 ignition[881]: fetch-offline: fetch-offline passed
Dec 16 03:13:03.818805 unknown[881]: fetched user config from "qemu"
Dec 16 03:13:03.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:03.819372 ignition[881]: Ignition finished successfully
Dec 16 03:13:03.822924 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 03:13:03.824221 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 16 03:13:03.825178 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 16 03:13:03.880872 ignition[890]: Ignition 2.24.0
Dec 16 03:13:03.880890 ignition[890]: Stage: kargs
Dec 16 03:13:03.881108 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Dec 16 03:13:03.881127 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:13:03.882153 ignition[890]: kargs: kargs passed
Dec 16 03:13:03.882222 ignition[890]: Ignition finished successfully
Dec 16 03:13:03.891581 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 16 03:13:03.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:03.895613 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 16 03:13:03.962863 ignition[898]: Ignition 2.24.0
Dec 16 03:13:03.963198 ignition[898]: Stage: disks
Dec 16 03:13:03.964466 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Dec 16 03:13:03.964477 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:13:03.965783 ignition[898]: disks: disks passed
Dec 16 03:13:03.965833 ignition[898]: Ignition finished successfully
Dec 16 03:13:03.975994 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 16 03:13:03.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:03.980277 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 16 03:13:03.981010 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 16 03:13:03.984543 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 03:13:03.988705 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 03:13:03.991823 systemd[1]: Reached target basic.target - Basic System.
Dec 16 03:13:03.996568 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 16 03:13:04.072752 systemd-fsck[907]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Dec 16 03:13:04.131750 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 16 03:13:04.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:04.136854 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 16 03:13:04.267194 kernel: EXT4-fs (vda9): mounted filesystem 1314c107-11a5-486b-9d52-be9f57b6bf1b r/w with ordered data mode. Quota mode: none.
Dec 16 03:13:04.267715 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 16 03:13:04.270045 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 16 03:13:04.274084 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 03:13:04.277156 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 16 03:13:04.278874 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 16 03:13:04.278926 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 16 03:13:04.278969 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 03:13:04.293518 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 16 03:13:04.301557 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (915)
Dec 16 03:13:04.301582 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 03:13:04.301593 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 03:13:04.296733 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 16 03:13:04.308395 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 03:13:04.308416 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 03:13:04.309716 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 03:13:04.539384 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 16 03:13:04.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:04.541907 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 16 03:13:04.544656 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 16 03:13:04.594450 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 16 03:13:04.598251 kernel: BTRFS info (device vda6): last unmount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 03:13:04.616397 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 16 03:13:04.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:04.635408 ignition[1013]: INFO : Ignition 2.24.0
Dec 16 03:13:04.635408 ignition[1013]: INFO : Stage: mount
Dec 16 03:13:04.638143 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 03:13:04.638143 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:13:04.642707 ignition[1013]: INFO : mount: mount passed
Dec 16 03:13:04.642707 ignition[1013]: INFO : Ignition finished successfully
Dec 16 03:13:04.645231 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 16 03:13:04.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:04.649028 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 16 03:13:04.681565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 16 03:13:04.720460 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1024)
Dec 16 03:13:04.720572 kernel: BTRFS info (device vda6): first mount of filesystem 7e31dbd7-b976-4d4a-a2e9-e2baacf4ad38
Dec 16 03:13:04.720589 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 16 03:13:04.726422 kernel: BTRFS info (device vda6): turning on async discard
Dec 16 03:13:04.726505 kernel: BTRFS info (device vda6): enabling free space tree
Dec 16 03:13:04.728418 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 16 03:13:04.773684 ignition[1041]: INFO : Ignition 2.24.0
Dec 16 03:13:04.773684 ignition[1041]: INFO : Stage: files
Dec 16 03:13:04.776690 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 03:13:04.776690 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:13:04.776690 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping
Dec 16 03:13:04.776690 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 16 03:13:04.776690 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 16 03:13:04.787398 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 16 03:13:04.790011 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 16 03:13:04.792645 unknown[1041]: wrote ssh authorized keys file for user: core
Dec 16 03:13:04.794498 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 16 03:13:04.796867 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 03:13:04.796867 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Dec 16 03:13:04.838947 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 16 03:13:04.980357 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Dec 16 03:13:04.980357 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 16 03:13:04.987492 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 16 03:13:04.987492 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 03:13:04.987492 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 16 03:13:04.987492 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 03:13:04.987492 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 16 03:13:04.987492 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 03:13:04.987492 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 16 03:13:05.010097 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 03:13:05.010097 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 16 03:13:05.010097 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 03:13:05.010097 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 03:13:05.010097 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 03:13:05.010097 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Dec 16 03:13:05.307476 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 16 03:13:06.179248 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Dec 16 03:13:06.179248 ignition[1041]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 16 03:13:06.186337 ignition[1041]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 03:13:06.186337 ignition[1041]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 16 03:13:06.186337 ignition[1041]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 16 03:13:06.186337 ignition[1041]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 16 03:13:06.186337 ignition[1041]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 03:13:06.186337 ignition[1041]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 16 03:13:06.186337 ignition[1041]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 16 03:13:06.186337 ignition[1041]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 16 03:13:06.215674 ignition[1041]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 03:13:06.228060 ignition[1041]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 16 03:13:06.231242 ignition[1041]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 16 03:13:06.231242 ignition[1041]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 16 03:13:06.231242 ignition[1041]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 16 03:13:06.231242 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 03:13:06.231242 ignition[1041]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 16 03:13:06.231242 ignition[1041]: INFO : files: files passed
Dec 16 03:13:06.231242 ignition[1041]: INFO : Ignition finished successfully
Dec 16 03:13:06.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.241188 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 16 03:13:06.244558 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 16 03:13:06.251553 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 16 03:13:06.276561 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 16 03:13:06.276749 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 16 03:13:06.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.285640 initrd-setup-root-after-ignition[1072]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 16 03:13:06.292973 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 03:13:06.292973 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 03:13:06.329411 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 16 03:13:06.333356 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 03:13:06.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.334985 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 16 03:13:06.341530 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 16 03:13:06.425578 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 16 03:13:06.425735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 16 03:13:06.456442 kernel: kauditd_printk_skb: 19 callbacks suppressed
Dec 16 03:13:06.456474 kernel: audit: type=1130 audit(1765854786.445:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.456501 kernel: audit: type=1131 audit(1765854786.445:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.445583 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 16 03:13:06.457255 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 16 03:13:06.463852 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 16 03:13:06.466657 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 16 03:13:06.514381 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 03:13:06.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.520772 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 16 03:13:06.528107 kernel: audit: type=1130 audit(1765854786.519:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.551621 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Dec 16 03:13:06.551854 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 16 03:13:06.583145 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 03:13:06.588832 systemd[1]: Stopped target timers.target - Timer Units.
Dec 16 03:13:06.592781 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 16 03:13:06.602824 kernel: audit: type=1131 audit(1765854786.595:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.592896 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 16 03:13:06.602877 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 16 03:13:06.637565 systemd[1]: Stopped target basic.target - Basic System.
Dec 16 03:13:06.638744 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 16 03:13:06.642095 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 16 03:13:06.643958 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 16 03:13:06.651992 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Dec 16 03:13:06.656052 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 16 03:13:06.715353 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 16 03:13:06.716532 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 16 03:13:06.717249 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 16 03:13:06.725936 systemd[1]: Stopped target swap.target - Swaps.
Dec 16 03:13:06.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.726937 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 16 03:13:06.727335 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 16 03:13:06.745819 kernel: audit: type=1131 audit(1765854786.733:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.740386 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 16 03:13:06.741802 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 03:13:06.746757 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 16 03:13:06.749537 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 03:13:06.778650 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 16 03:13:06.778857 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 16 03:13:06.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.799881 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 03:13:06.840034 kernel: audit: type=1131 audit(1765854786.781:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.840071 kernel: audit: type=1131 audit(1765854786.807:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.800100 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 03:13:06.807290 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 03:13:06.840700 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 03:13:06.840871 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 03:13:06.845042 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 03:13:06.867359 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 03:13:06.868687 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 03:13:06.868883 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 03:13:06.873669 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 03:13:06.901076 kernel: audit: type=1131 audit(1765854786.883:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.873756 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 03:13:06.907972 kernel: audit: type=1131 audit(1765854786.901:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.876303 systemd[1]: systemd-journald-audit.socket: Deactivated successfully.
Dec 16 03:13:06.876387 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket.
Dec 16 03:13:06.879979 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 16 03:13:06.880221 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 16 03:13:06.883373 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 16 03:13:06.883554 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 16 03:13:06.903245 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 16 03:13:06.927213 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 16 03:13:06.979246 kernel: audit: type=1131 audit(1765854786.928:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.979668 ignition[1098]: INFO : Ignition 2.24.0
Dec 16 03:13:06.979668 ignition[1098]: INFO : Stage: umount
Dec 16 03:13:06.979668 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 16 03:13:06.979668 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 16 03:13:06.979668 ignition[1098]: INFO : umount: umount passed
Dec 16 03:13:06.979668 ignition[1098]: INFO : Ignition finished successfully
Dec 16 03:13:06.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:07.014000 audit: BPF prog-id=6 op=UNLOAD
Dec 16 03:13:06.928011 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 03:13:07.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.928195 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 03:13:06.928454 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 16 03:13:06.928577 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 03:13:06.933763 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 16 03:13:07.028000 audit: BPF prog-id=9 op=UNLOAD
Dec 16 03:13:07.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:07.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:07.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:13:06.933918 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 03:13:06.953453 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 03:13:06.953625 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 03:13:06.961374 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 16 03:13:06.961566 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 03:13:06.963541 systemd[1]: Stopped target network.target - Network.
Dec 16 03:13:06.964035 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 03:13:06.964128 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 03:13:06.964690 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 03:13:06.964775 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 03:13:06.965701 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 03:13:06.965784 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 03:13:06.966336 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 03:13:07.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:06.966417 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 03:13:06.967065 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 03:13:06.967619 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 03:13:06.979561 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 03:13:07.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:06.979774 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 03:13:07.014408 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 03:13:07.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.014554 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 03:13:07.023130 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 03:13:07.026756 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Dec 16 03:13:07.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.026855 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:13:07.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.028773 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 03:13:07.029025 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 03:13:07.029089 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 03:13:07.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.029712 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 03:13:07.029767 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:13:07.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.029950 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 03:13:07.030005 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 03:13:07.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.030572 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Dec 16 03:13:07.033882 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 03:13:07.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.060420 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 03:13:07.060754 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:13:07.063841 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 03:13:07.063966 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 03:13:07.065827 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 03:13:07.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.065872 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:13:07.066650 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 03:13:07.066746 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 03:13:07.076851 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 03:13:07.076928 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 03:13:07.081841 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 03:13:07.081928 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 03:13:07.085610 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 03:13:07.091012 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 03:13:07.091095 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 16 03:13:07.091842 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 03:13:07.091922 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:13:07.092722 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 03:13:07.092810 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:13:07.093633 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 03:13:07.093713 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:13:07.094536 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:13:07.094620 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:13:07.096482 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 03:13:07.096670 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 03:13:07.199071 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 03:13:07.199244 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 03:13:07.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.223059 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 03:13:07.223234 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Dec 16 03:13:07.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.230210 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 03:13:07.232692 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 03:13:07.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:07.232767 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 03:13:07.239746 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 03:13:07.262863 systemd[1]: Switching root. Dec 16 03:13:07.309273 systemd-journald[315]: Journal stopped Dec 16 03:13:09.212084 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). Dec 16 03:13:09.212202 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 03:13:09.212226 kernel: SELinux: policy capability open_perms=1 Dec 16 03:13:09.212245 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 03:13:09.212260 kernel: SELinux: policy capability always_check_network=0 Dec 16 03:13:09.212276 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 03:13:09.212301 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 03:13:09.212321 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 03:13:09.212337 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 03:13:09.212362 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 03:13:09.212378 systemd[1]: Successfully loaded SELinux policy in 74.617ms. Dec 16 03:13:09.212402 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.869ms. 
Dec 16 03:13:09.212419 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 03:13:09.212438 systemd[1]: Detected virtualization kvm. Dec 16 03:13:09.212455 systemd[1]: Detected architecture x86-64. Dec 16 03:13:09.212472 systemd[1]: Detected first boot. Dec 16 03:13:09.212490 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 16 03:13:09.212507 zram_generator::config[1142]: No configuration found. Dec 16 03:13:09.212525 kernel: Guest personality initialized and is inactive Dec 16 03:13:09.212544 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 03:13:09.212559 kernel: Initialized host personality Dec 16 03:13:09.212574 kernel: NET: Registered PF_VSOCK protocol family Dec 16 03:13:09.212590 systemd[1]: Populated /etc with preset unit settings. Dec 16 03:13:09.212607 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 03:13:09.212625 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 03:13:09.212641 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 03:13:09.212664 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 03:13:09.212684 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 03:13:09.212700 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 03:13:09.212716 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 03:13:09.212739 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Dec 16 03:13:09.212756 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 03:13:09.212773 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 03:13:09.212792 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 03:13:09.212808 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 03:13:09.212824 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 03:13:09.212841 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 03:13:09.212862 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 03:13:09.212889 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 03:13:09.212906 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 03:13:09.212925 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 03:13:09.212941 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 03:13:09.212957 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 03:13:09.212973 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 03:13:09.212990 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 03:13:09.213009 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 03:13:09.213028 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 03:13:09.213044 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 03:13:09.213060 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Dec 16 03:13:09.213076 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 16 03:13:09.213092 systemd[1]: Reached target slices.target - Slice Units. Dec 16 03:13:09.213108 systemd[1]: Reached target swap.target - Swaps. Dec 16 03:13:09.213123 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 03:13:09.213139 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 03:13:09.213158 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 03:13:09.213190 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 16 03:13:09.213207 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 16 03:13:09.213223 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 03:13:09.213240 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 16 03:13:09.213256 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 16 03:13:09.213283 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 03:13:09.213302 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 03:13:09.213318 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 03:13:09.213334 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 03:13:09.213351 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 03:13:09.213369 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 03:13:09.213388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:13:09.213405 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Dec 16 03:13:09.213422 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 03:13:09.213438 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 03:13:09.213454 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 03:13:09.213473 systemd[1]: Reached target machines.target - Containers. Dec 16 03:13:09.213490 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 03:13:09.213506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:13:09.213523 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 03:13:09.213539 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 03:13:09.213556 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:13:09.213574 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:13:09.213595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:13:09.213611 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 03:13:09.213627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:13:09.213644 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 03:13:09.213660 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 03:13:09.213676 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 03:13:09.213694 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 03:13:09.213714 systemd[1]: Stopped systemd-fsck-usr.service. 
Dec 16 03:13:09.213732 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:13:09.213751 kernel: fuse: init (API version 7.41) Dec 16 03:13:09.213771 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 03:13:09.213789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 03:13:09.213806 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 03:13:09.213823 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 03:13:09.213839 kernel: ACPI: bus type drm_connector registered Dec 16 03:13:09.213856 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 03:13:09.213873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 03:13:09.213902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:13:09.213924 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 03:13:09.213941 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 03:13:09.213990 systemd-journald[1213]: Collecting audit messages is enabled. Dec 16 03:13:09.214020 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 03:13:09.214037 systemd-journald[1213]: Journal started Dec 16 03:13:09.214068 systemd-journald[1213]: Runtime Journal (/run/log/journal/a7dcbdddb21d4324a49972a831bf252d) is 5.9M, max 47.8M, 41.8M free. Dec 16 03:13:09.225335 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 03:13:09.225421 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Dec 16 03:13:09.225446 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 03:13:08.981000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 16 03:13:09.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.144000 audit: BPF prog-id=14 op=UNLOAD Dec 16 03:13:09.144000 audit: BPF prog-id=13 op=UNLOAD Dec 16 03:13:09.145000 audit: BPF prog-id=15 op=LOAD Dec 16 03:13:09.145000 audit: BPF prog-id=16 op=LOAD Dec 16 03:13:09.146000 audit: BPF prog-id=17 op=LOAD Dec 16 03:13:09.209000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 16 03:13:09.209000 audit[1213]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe9409b760 a2=4000 a3=0 items=0 ppid=1 pid=1213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:09.209000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 16 03:13:08.825589 systemd[1]: Queued start job for default target multi-user.target. Dec 16 03:13:08.852831 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 03:13:08.853428 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 03:13:09.232817 systemd[1]: Started systemd-journald.service - Journal Service. 
Dec 16 03:13:09.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.236301 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 03:13:09.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.239435 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 03:13:09.239814 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 03:13:09.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.244379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:13:09.244709 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:13:09.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:13:09.247594 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 03:13:09.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.250462 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:13:09.250906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:13:09.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.253338 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:13:09.253637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:13:09.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.256353 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 03:13:09.256655 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Dec 16 03:13:09.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.259070 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:13:09.259366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:13:09.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.261886 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 03:13:09.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.265872 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 03:13:09.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.270151 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Dec 16 03:13:09.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.273261 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 03:13:09.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.288689 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 03:13:09.291400 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 16 03:13:09.295101 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 03:13:09.298374 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 03:13:09.300571 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 03:13:09.300701 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 03:13:09.304005 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 03:13:09.308794 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:13:09.309128 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:13:09.313308 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 03:13:09.319361 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Dec 16 03:13:09.321539 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:13:09.323610 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 03:13:09.325827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:13:09.327389 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 03:13:09.334265 systemd-journald[1213]: Time spent on flushing to /var/log/journal/a7dcbdddb21d4324a49972a831bf252d is 33.746ms for 1166 entries. Dec 16 03:13:09.334265 systemd-journald[1213]: System Journal (/var/log/journal/a7dcbdddb21d4324a49972a831bf252d) is 8M, max 163.5M, 155.5M free. Dec 16 03:13:09.382774 systemd-journald[1213]: Received client request to flush runtime journal. Dec 16 03:13:09.382822 kernel: loop1: detected capacity change from 0 to 111560 Dec 16 03:13:09.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.334182 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 03:13:09.338426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 03:13:09.343359 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 03:13:09.346317 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Dec 16 03:13:09.349065 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 03:13:09.372946 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 03:13:09.376325 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 03:13:09.385192 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 03:13:09.388770 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 03:13:09.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.421757 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Dec 16 03:13:09.422157 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Dec 16 03:13:09.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.425534 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 03:13:09.433595 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 03:13:09.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.438545 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 03:13:09.447220 kernel: loop2: detected capacity change from 0 to 50784 Dec 16 03:13:09.476770 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Dec 16 03:13:09.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.488337 kernel: loop3: detected capacity change from 0 to 224512 Dec 16 03:13:09.494151 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 03:13:09.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.498000 audit: BPF prog-id=18 op=LOAD Dec 16 03:13:09.498000 audit: BPF prog-id=19 op=LOAD Dec 16 03:13:09.498000 audit: BPF prog-id=20 op=LOAD Dec 16 03:13:09.500072 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 16 03:13:09.503000 audit: BPF prog-id=21 op=LOAD Dec 16 03:13:09.507354 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 03:13:09.511518 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 03:13:09.516000 audit: BPF prog-id=22 op=LOAD Dec 16 03:13:09.516000 audit: BPF prog-id=23 op=LOAD Dec 16 03:13:09.517000 audit: BPF prog-id=24 op=LOAD Dec 16 03:13:09.518246 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 16 03:13:09.531000 audit: BPF prog-id=25 op=LOAD Dec 16 03:13:09.531000 audit: BPF prog-id=26 op=LOAD Dec 16 03:13:09.531000 audit: BPF prog-id=27 op=LOAD Dec 16 03:13:09.533563 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 03:13:09.545184 kernel: loop4: detected capacity change from 0 to 111560 Dec 16 03:13:09.554650 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Dec 16 03:13:09.555107 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. 
Dec 16 03:13:09.564233 kernel: loop5: detected capacity change from 0 to 50784 Dec 16 03:13:09.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.565311 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 03:13:09.580208 kernel: loop6: detected capacity change from 0 to 224512 Dec 16 03:13:09.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.587267 systemd-nsresourced[1286]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Dec 16 03:13:09.588845 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 16 03:13:09.593050 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 03:13:09.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.613059 (sd-merge)[1289]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Dec 16 03:13:09.618089 (sd-merge)[1289]: Merged extensions into '/usr'. Dec 16 03:13:09.624150 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 03:13:09.624223 systemd[1]: Reloading... Dec 16 03:13:09.685454 systemd-oomd[1282]: No swap; memory pressure usage will be degraded Dec 16 03:13:09.706512 systemd-resolved[1283]: Positive Trust Anchors: Dec 16 03:13:09.706538 systemd-resolved[1283]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 03:13:09.706544 systemd-resolved[1283]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 16 03:13:09.706581 systemd-resolved[1283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 03:13:09.715040 systemd-resolved[1283]: Defaulting to hostname 'linux'. Dec 16 03:13:09.719196 zram_generator::config[1340]: No configuration found. Dec 16 03:13:09.961946 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 03:13:09.962707 systemd[1]: Reloading finished in 337 ms. Dec 16 03:13:09.994801 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 16 03:13:09.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:09.997837 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 03:13:09.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:10.000587 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Dec 16 03:13:10.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:10.007189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 03:13:10.022426 systemd[1]: Starting ensure-sysext.service... Dec 16 03:13:10.025978 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 03:13:10.029000 audit: BPF prog-id=28 op=LOAD Dec 16 03:13:10.035000 audit: BPF prog-id=15 op=UNLOAD Dec 16 03:13:10.035000 audit: BPF prog-id=29 op=LOAD Dec 16 03:13:10.035000 audit: BPF prog-id=30 op=LOAD Dec 16 03:13:10.035000 audit: BPF prog-id=16 op=UNLOAD Dec 16 03:13:10.035000 audit: BPF prog-id=17 op=UNLOAD Dec 16 03:13:10.038000 audit: BPF prog-id=31 op=LOAD Dec 16 03:13:10.038000 audit: BPF prog-id=18 op=UNLOAD Dec 16 03:13:10.038000 audit: BPF prog-id=32 op=LOAD Dec 16 03:13:10.038000 audit: BPF prog-id=33 op=LOAD Dec 16 03:13:10.038000 audit: BPF prog-id=19 op=UNLOAD Dec 16 03:13:10.038000 audit: BPF prog-id=20 op=UNLOAD Dec 16 03:13:10.040000 audit: BPF prog-id=34 op=LOAD Dec 16 03:13:10.040000 audit: BPF prog-id=22 op=UNLOAD Dec 16 03:13:10.040000 audit: BPF prog-id=35 op=LOAD Dec 16 03:13:10.040000 audit: BPF prog-id=36 op=LOAD Dec 16 03:13:10.040000 audit: BPF prog-id=23 op=UNLOAD Dec 16 03:13:10.040000 audit: BPF prog-id=24 op=UNLOAD Dec 16 03:13:10.042000 audit: BPF prog-id=37 op=LOAD Dec 16 03:13:10.042000 audit: BPF prog-id=25 op=UNLOAD Dec 16 03:13:10.042000 audit: BPF prog-id=38 op=LOAD Dec 16 03:13:10.042000 audit: BPF prog-id=39 op=LOAD Dec 16 03:13:10.042000 audit: BPF prog-id=26 op=UNLOAD Dec 16 03:13:10.042000 audit: BPF prog-id=27 op=UNLOAD Dec 16 03:13:10.043000 audit: BPF prog-id=40 op=LOAD Dec 16 03:13:10.043000 audit: BPF prog-id=21 op=UNLOAD Dec 16 03:13:10.053872 systemd[1]: Reload requested from client PID 
1370 ('systemctl') (unit ensure-sysext.service)... Dec 16 03:13:10.054071 systemd[1]: Reloading... Dec 16 03:13:10.060783 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 03:13:10.061441 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 03:13:10.061828 systemd-tmpfiles[1371]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 03:13:10.063279 systemd-tmpfiles[1371]: ACLs are not supported, ignoring. Dec 16 03:13:10.063357 systemd-tmpfiles[1371]: ACLs are not supported, ignoring. Dec 16 03:13:10.070809 systemd-tmpfiles[1371]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:13:10.070827 systemd-tmpfiles[1371]: Skipping /boot Dec 16 03:13:10.084940 systemd-tmpfiles[1371]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 03:13:10.084953 systemd-tmpfiles[1371]: Skipping /boot Dec 16 03:13:10.145270 zram_generator::config[1408]: No configuration found. Dec 16 03:13:10.390046 systemd[1]: Reloading finished in 335 ms. Dec 16 03:13:10.414778 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 03:13:10.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:10.418517 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 03:13:10.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:13:10.426000 audit: BPF prog-id=41 op=LOAD Dec 16 03:13:10.426000 audit: BPF prog-id=34 op=UNLOAD Dec 16 03:13:10.426000 audit: BPF prog-id=42 op=LOAD Dec 16 03:13:10.426000 audit: BPF prog-id=43 op=LOAD Dec 16 03:13:10.426000 audit: BPF prog-id=35 op=UNLOAD Dec 16 03:13:10.426000 audit: BPF prog-id=36 op=UNLOAD Dec 16 03:13:10.427000 audit: BPF prog-id=44 op=LOAD Dec 16 03:13:10.427000 audit: BPF prog-id=40 op=UNLOAD Dec 16 03:13:10.429000 audit: BPF prog-id=45 op=LOAD Dec 16 03:13:10.429000 audit: BPF prog-id=37 op=UNLOAD Dec 16 03:13:10.429000 audit: BPF prog-id=46 op=LOAD Dec 16 03:13:10.429000 audit: BPF prog-id=47 op=LOAD Dec 16 03:13:10.429000 audit: BPF prog-id=38 op=UNLOAD Dec 16 03:13:10.429000 audit: BPF prog-id=39 op=UNLOAD Dec 16 03:13:10.430000 audit: BPF prog-id=48 op=LOAD Dec 16 03:13:10.430000 audit: BPF prog-id=31 op=UNLOAD Dec 16 03:13:10.431000 audit: BPF prog-id=49 op=LOAD Dec 16 03:13:10.451000 audit: BPF prog-id=50 op=LOAD Dec 16 03:13:10.451000 audit: BPF prog-id=32 op=UNLOAD Dec 16 03:13:10.451000 audit: BPF prog-id=33 op=UNLOAD Dec 16 03:13:10.452000 audit: BPF prog-id=51 op=LOAD Dec 16 03:13:10.453000 audit: BPF prog-id=28 op=UNLOAD Dec 16 03:13:10.453000 audit: BPF prog-id=52 op=LOAD Dec 16 03:13:10.453000 audit: BPF prog-id=53 op=LOAD Dec 16 03:13:10.453000 audit: BPF prog-id=29 op=UNLOAD Dec 16 03:13:10.453000 audit: BPF prog-id=30 op=UNLOAD Dec 16 03:13:10.479697 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:13:10.481628 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:13:10.488721 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 03:13:10.491075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 16 03:13:10.504226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:13:10.513037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:13:10.517256 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:13:10.520065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:13:10.520444 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:13:10.524034 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 03:13:10.526453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:13:10.529246 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 03:13:10.531000 audit: BPF prog-id=8 op=UNLOAD Dec 16 03:13:10.531000 audit: BPF prog-id=7 op=UNLOAD Dec 16 03:13:10.540000 audit: BPF prog-id=54 op=LOAD Dec 16 03:13:10.540000 audit: BPF prog-id=55 op=LOAD Dec 16 03:13:10.541647 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 03:13:10.549620 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 03:13:10.552392 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:13:10.557075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 03:13:10.557570 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 16 03:13:10.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:10.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:10.561041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:13:10.562156 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:13:10.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:10.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:10.568258 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:13:10.569861 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:13:10.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:10.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:10.580131 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 16 03:13:10.580355 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 03:13:10.583401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 03:13:10.584857 systemd-udevd[1462]: Using default interface naming scheme 'v257'. Dec 16 03:13:10.584000 audit[1466]: SYSTEM_BOOT pid=1466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 16 03:13:10.587425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 03:13:10.592125 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 03:13:10.594094 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:13:10.594524 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:13:10.594626 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:13:10.594718 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:13:10.605400 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:13:10.605658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 16 03:13:10.607000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 16 03:13:10.607000 audit[1477]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe823e80d0 a2=420 a3=0 items=0 ppid=1442 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:10.607000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:13:10.609318 augenrules[1477]: No rules Dec 16 03:13:10.612274 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 03:13:10.614248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 03:13:10.614457 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 16 03:13:10.614572 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 03:13:10.614705 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 03:13:10.616546 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:13:10.616870 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 03:13:10.619681 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 03:13:10.622651 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 03:13:10.625580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Dec 16 03:13:10.625824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 03:13:10.628447 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 03:13:10.628697 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 03:13:10.631252 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 03:13:10.631478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 03:13:10.634518 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 03:13:10.635095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 03:13:10.645593 systemd[1]: Finished ensure-sysext.service. Dec 16 03:13:10.648058 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 03:13:10.661516 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 03:13:10.664295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 03:13:10.664369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 03:13:10.668339 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 03:13:10.676651 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 03:13:10.679268 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 03:13:10.776040 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 03:13:10.789521 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Dec 16 03:13:10.793745 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 03:13:10.798261 systemd-networkd[1500]: lo: Link UP Dec 16 03:13:10.798272 systemd-networkd[1500]: lo: Gained carrier Dec 16 03:13:10.800334 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 03:13:10.801402 systemd-networkd[1500]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:13:10.801411 systemd-networkd[1500]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 03:13:10.802377 systemd-networkd[1500]: eth0: Link UP Dec 16 03:13:10.803030 systemd-networkd[1500]: eth0: Gained carrier Dec 16 03:13:10.803049 systemd-networkd[1500]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 16 03:13:10.803561 systemd[1]: Reached target network.target - Network. Dec 16 03:13:10.811323 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 03:13:10.816793 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 03:13:10.825345 systemd-networkd[1500]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 03:13:10.829296 systemd-timesyncd[1503]: Network configuration changed, trying to establish connection. Dec 16 03:13:11.762975 systemd-resolved[1283]: Clock change detected. Flushing caches. Dec 16 03:13:11.763070 systemd-timesyncd[1503]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 16 03:13:11.763212 systemd-timesyncd[1503]: Initial clock synchronization to Tue 2025-12-16 03:13:11.762815 UTC. Dec 16 03:13:11.776264 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 03:13:11.787899 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Dec 16 03:13:11.799159 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 03:13:11.810620 kernel: ACPI: button: Power Button [PWRF] Dec 16 03:13:11.832801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 03:13:11.837280 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 16 03:13:11.837659 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 03:13:11.837942 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 03:13:11.840515 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 03:13:12.077174 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 03:13:12.118657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:13:12.133367 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 03:13:12.133881 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:13:12.143870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 03:13:12.284603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 03:13:12.317049 kernel: kvm_amd: TSC scaling supported Dec 16 03:13:12.317168 kernel: kvm_amd: Nested Virtualization enabled Dec 16 03:13:12.317186 kernel: kvm_amd: Nested Paging enabled Dec 16 03:13:12.317236 kernel: kvm_amd: LBR virtualization supported Dec 16 03:13:12.318409 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Dec 16 03:13:12.318533 kernel: kvm_amd: Virtual GIF supported Dec 16 03:13:12.382762 kernel: EDAC MC: Ver: 3.0.0 Dec 16 03:13:12.615121 ldconfig[1453]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 03:13:12.632260 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Dec 16 03:13:12.643590 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 03:13:12.686108 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 03:13:12.688951 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 03:13:12.690850 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 03:13:12.692935 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 03:13:12.695322 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 03:13:12.697524 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 03:13:12.699884 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 03:13:12.702335 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 16 03:13:12.704843 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 16 03:13:12.706830 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 03:13:12.708930 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 03:13:12.708982 systemd[1]: Reached target paths.target - Path Units. Dec 16 03:13:12.710518 systemd[1]: Reached target timers.target - Timer Units. Dec 16 03:13:12.713065 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 03:13:12.717266 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 03:13:12.722586 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 03:13:12.725126 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Dec 16 03:13:12.727284 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 03:13:12.735832 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 03:13:12.741694 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 03:13:12.746133 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 03:13:12.750685 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 03:13:12.753042 systemd[1]: Reached target basic.target - Basic System. Dec 16 03:13:12.756256 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:13:12.756313 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 03:13:12.758507 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 03:13:12.762880 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 03:13:12.768794 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 03:13:12.776542 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 03:13:12.783798 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 03:13:12.787501 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 03:13:12.791073 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 03:13:12.792922 jq[1564]: false Dec 16 03:13:12.794271 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 03:13:12.801682 extend-filesystems[1565]: Found /dev/vda6 Dec 16 03:13:12.805578 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 16 03:13:12.810795 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Dec 16 03:13:12.812326 extend-filesystems[1565]: Found /dev/vda9 Dec 16 03:13:12.822758 extend-filesystems[1565]: Checking size of /dev/vda9 Dec 16 03:13:12.819879 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 03:13:12.828743 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache Dec 16 03:13:12.828707 oslogin_cache_refresh[1566]: Refreshing passwd entry cache Dec 16 03:13:12.835643 extend-filesystems[1565]: Resized partition /dev/vda9 Dec 16 03:13:12.838462 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting Dec 16 03:13:12.838462 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:13:12.838462 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache Dec 16 03:13:12.837741 oslogin_cache_refresh[1566]: Failure getting users, quitting Dec 16 03:13:12.837767 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 03:13:12.837839 oslogin_cache_refresh[1566]: Refreshing group entry cache Dec 16 03:13:12.841741 extend-filesystems[1585]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 03:13:12.859696 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Dec 16 03:13:12.840957 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 03:13:12.859890 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting Dec 16 03:13:12.859890 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:13:12.855675 oslogin_cache_refresh[1566]: Failure getting groups, quitting Dec 16 03:13:12.845755 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Dec 16 03:13:12.855692 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 03:13:12.846411 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 03:13:12.848070 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 03:13:12.863844 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 03:13:12.874932 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 03:13:12.878278 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 03:13:12.879453 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 03:13:12.879974 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 03:13:12.880304 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 03:13:12.885331 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 03:13:12.889439 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 03:13:12.895439 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 03:13:12.895759 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 03:13:12.955969 jq[1593]: true Dec 16 03:13:12.956149 update_engine[1589]: I20251216 03:13:12.903651 1589 main.cc:92] Flatcar Update Engine starting Dec 16 03:13:12.966739 jq[1604]: true Dec 16 03:13:12.977567 tar[1600]: linux-amd64/LICENSE Dec 16 03:13:12.977567 tar[1600]: linux-amd64/helm Dec 16 03:13:12.989859 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Dec 16 03:13:13.045287 dbus-daemon[1562]: [system] SELinux support is enabled Dec 16 03:13:13.045869 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 16 03:13:13.131551 update_engine[1589]: I20251216 03:13:13.068239 1589 update_check_scheduler.cc:74] Next update check in 3m40s Dec 16 03:13:13.054804 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 03:13:13.131751 extend-filesystems[1585]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 03:13:13.131751 extend-filesystems[1585]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 16 03:13:13.131751 extend-filesystems[1585]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Dec 16 03:13:13.054837 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 03:13:13.181467 extend-filesystems[1565]: Resized filesystem in /dev/vda9 Dec 16 03:13:13.183246 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 03:13:13.057916 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 03:13:13.057956 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 03:13:13.065827 systemd[1]: Started update-engine.service - Update Engine. Dec 16 03:13:13.072925 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 03:13:13.116040 systemd-logind[1586]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 03:13:13.116107 systemd-logind[1586]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 03:13:13.122360 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 03:13:13.122863 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 03:13:13.133129 systemd-logind[1586]: New seat seat0. 
Dec 16 03:13:13.165136 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 03:13:13.246770 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 03:13:13.261440 bash[1629]: Updated "/home/core/.ssh/authorized_keys" Dec 16 03:13:13.262426 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 03:13:13.267652 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 16 03:13:13.287897 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 03:13:13.292949 systemd-networkd[1500]: eth0: Gained IPv6LL Dec 16 03:13:13.293816 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 03:13:13.339191 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 03:13:13.342100 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 03:13:13.349766 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 16 03:13:13.357843 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:13:13.362158 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 03:13:13.366505 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 03:13:13.366961 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 03:13:13.380329 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 03:13:13.417195 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 03:13:13.428510 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 03:13:13.434113 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 03:13:13.437068 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 03:13:13.447402 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Dec 16 03:13:13.467017 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 16 03:13:13.467583 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 16 03:13:13.470685 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 03:13:13.571474 containerd[1606]: time="2025-12-16T03:13:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 03:13:13.573918 containerd[1606]: time="2025-12-16T03:13:13.573846329Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 16 03:13:13.598932 containerd[1606]: time="2025-12-16T03:13:13.598538676Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.724µs" Dec 16 03:13:13.599135 containerd[1606]: time="2025-12-16T03:13:13.599104878Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 03:13:13.599698 containerd[1606]: time="2025-12-16T03:13:13.599677291Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 03:13:13.599806 containerd[1606]: time="2025-12-16T03:13:13.599786446Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 03:13:13.600157 containerd[1606]: time="2025-12-16T03:13:13.600133657Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 03:13:13.600251 containerd[1606]: time="2025-12-16T03:13:13.600236781Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:13:13.600397 containerd[1606]: time="2025-12-16T03:13:13.600376753Z" level=info msg="skip loading plugin" 
error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 03:13:13.600450 containerd[1606]: time="2025-12-16T03:13:13.600438649Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:13:13.600873 containerd[1606]: time="2025-12-16T03:13:13.600851824Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 03:13:13.600961 containerd[1606]: time="2025-12-16T03:13:13.600946171Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:13:13.601036 containerd[1606]: time="2025-12-16T03:13:13.601020641Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 03:13:13.601082 containerd[1606]: time="2025-12-16T03:13:13.601070865Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:13:13.601499 containerd[1606]: time="2025-12-16T03:13:13.601475894Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 16 03:13:13.601610 containerd[1606]: time="2025-12-16T03:13:13.601591591Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 03:13:13.601865 containerd[1606]: time="2025-12-16T03:13:13.601843885Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 03:13:13.602329 containerd[1606]: time="2025-12-16T03:13:13.602309257Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:13:13.602431 containerd[1606]: time="2025-12-16T03:13:13.602410517Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 03:13:13.602530 containerd[1606]: time="2025-12-16T03:13:13.602503782Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 03:13:13.602654 containerd[1606]: time="2025-12-16T03:13:13.602635770Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 03:13:13.603326 containerd[1606]: time="2025-12-16T03:13:13.603150014Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 03:13:13.603326 containerd[1606]: time="2025-12-16T03:13:13.603254921Z" level=info msg="metadata content store policy set" policy=shared Dec 16 03:13:13.653110 containerd[1606]: time="2025-12-16T03:13:13.652946480Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 03:13:13.654548 containerd[1606]: time="2025-12-16T03:13:13.654497238Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655005692Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655034796Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655104808Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager 
type=io.containerd.lease.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655125927Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655143210Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655160382Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655176482Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655202040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655220705Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655234601Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655248036Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.655299813Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 03:13:13.657848 containerd[1606]: time="2025-12-16T03:13:13.657619414Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 03:13:13.658297 containerd[1606]: time="2025-12-16T03:13:13.657655722Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 03:13:13.658297 containerd[1606]: time="2025-12-16T03:13:13.657676931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 03:13:13.658297 containerd[1606]: time="2025-12-16T03:13:13.657692280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 03:13:13.658471 containerd[1606]: time="2025-12-16T03:13:13.657704393Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 03:13:13.658548 containerd[1606]: time="2025-12-16T03:13:13.658529039Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 03:13:13.658613 containerd[1606]: time="2025-12-16T03:13:13.658599972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 03:13:13.658679 containerd[1606]: time="2025-12-16T03:13:13.658664854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 03:13:13.658787 containerd[1606]: time="2025-12-16T03:13:13.658769821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 03:13:13.658864 containerd[1606]: time="2025-12-16T03:13:13.658850913Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 03:13:13.658927 containerd[1606]: time="2025-12-16T03:13:13.658914202Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 03:13:13.659191 containerd[1606]: time="2025-12-16T03:13:13.659166525Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 03:13:13.659350 containerd[1606]: time="2025-12-16T03:13:13.659320914Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter 
\"overlayfs\"" Dec 16 03:13:13.659417 containerd[1606]: time="2025-12-16T03:13:13.659404892Z" level=info msg="Start snapshots syncer" Dec 16 03:13:13.659533 containerd[1606]: time="2025-12-16T03:13:13.659516852Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 03:13:13.660525 containerd[1606]: time="2025-12-16T03:13:13.660307014Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\
":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 03:13:13.660525 containerd[1606]: time="2025-12-16T03:13:13.660434924Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 03:13:13.661006 containerd[1606]: time="2025-12-16T03:13:13.660908752Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 03:13:13.661237 containerd[1606]: time="2025-12-16T03:13:13.661216950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 03:13:13.661330 containerd[1606]: time="2025-12-16T03:13:13.661313932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 03:13:13.661478 containerd[1606]: time="2025-12-16T03:13:13.661454906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 03:13:13.661557 containerd[1606]: time="2025-12-16T03:13:13.661542641Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 03:13:13.661621 containerd[1606]: time="2025-12-16T03:13:13.661607653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 03:13:13.661703 containerd[1606]: time="2025-12-16T03:13:13.661687212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 03:13:13.661800 containerd[1606]: time="2025-12-16T03:13:13.661785737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 03:13:13.661862 containerd[1606]: time="2025-12-16T03:13:13.661849226Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 03:13:13.661927 
containerd[1606]: time="2025-12-16T03:13:13.661914208Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 03:13:13.662074 containerd[1606]: time="2025-12-16T03:13:13.662021038Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:13:13.662074 containerd[1606]: time="2025-12-16T03:13:13.662043059Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 03:13:13.662559 containerd[1606]: time="2025-12-16T03:13:13.662054050Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:13:13.662559 containerd[1606]: time="2025-12-16T03:13:13.662298519Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 03:13:13.662559 containerd[1606]: time="2025-12-16T03:13:13.662311783Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 03:13:13.662559 containerd[1606]: time="2025-12-16T03:13:13.662381364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 03:13:13.662559 containerd[1606]: time="2025-12-16T03:13:13.662401732Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 03:13:13.662559 containerd[1606]: time="2025-12-16T03:13:13.662426408Z" level=info msg="runtime interface created" Dec 16 03:13:13.662559 containerd[1606]: time="2025-12-16T03:13:13.662436898Z" level=info msg="created NRI interface" Dec 16 03:13:13.662559 containerd[1606]: time="2025-12-16T03:13:13.662449772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 03:13:13.662559 containerd[1606]: 
time="2025-12-16T03:13:13.662467866Z" level=info msg="Connect containerd service" Dec 16 03:13:13.662559 containerd[1606]: time="2025-12-16T03:13:13.662503002Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 03:13:13.664730 containerd[1606]: time="2025-12-16T03:13:13.664683201Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 03:13:13.968704 tar[1600]: linux-amd64/README.md Dec 16 03:13:13.976799 containerd[1606]: time="2025-12-16T03:13:13.976645712Z" level=info msg="Start subscribing containerd event" Dec 16 03:13:13.977267 containerd[1606]: time="2025-12-16T03:13:13.976804920Z" level=info msg="Start recovering state" Dec 16 03:13:13.977267 containerd[1606]: time="2025-12-16T03:13:13.977097439Z" level=info msg="Start event monitor" Dec 16 03:13:13.977267 containerd[1606]: time="2025-12-16T03:13:13.977114932Z" level=info msg="Start cni network conf syncer for default" Dec 16 03:13:13.977267 containerd[1606]: time="2025-12-16T03:13:13.977151500Z" level=info msg="Start streaming server" Dec 16 03:13:13.977267 containerd[1606]: time="2025-12-16T03:13:13.977170736Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 03:13:13.977267 containerd[1606]: time="2025-12-16T03:13:13.977180144Z" level=info msg="runtime interface starting up..." Dec 16 03:13:13.977267 containerd[1606]: time="2025-12-16T03:13:13.977188900Z" level=info msg="starting plugins..." Dec 16 03:13:13.977267 containerd[1606]: time="2025-12-16T03:13:13.977206053Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 03:13:13.977965 containerd[1606]: time="2025-12-16T03:13:13.977211884Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 16 03:13:13.977965 containerd[1606]: time="2025-12-16T03:13:13.977642701Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 03:13:13.977965 containerd[1606]: time="2025-12-16T03:13:13.977761174Z" level=info msg="containerd successfully booted in 0.407179s" Dec 16 03:13:13.978041 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 03:13:13.992481 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 03:13:14.014351 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 03:13:14.018622 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:49680.service - OpenSSH per-connection server daemon (10.0.0.1:49680). Dec 16 03:13:14.130208 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 49680 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:13:14.132465 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:13:14.139435 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 03:13:14.143159 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 03:13:14.150117 systemd-logind[1586]: New session 1 of user core. Dec 16 03:13:14.169558 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 03:13:14.175562 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 03:13:14.198786 (systemd)[1704]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:13:14.201993 systemd-logind[1586]: New session 2 of user core. Dec 16 03:13:14.579517 systemd[1704]: Queued start job for default target default.target. Dec 16 03:13:14.609300 systemd[1704]: Created slice app.slice - User Application Slice. Dec 16 03:13:14.609353 systemd[1704]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. 
Dec 16 03:13:14.609373 systemd[1704]: Reached target paths.target - Paths. Dec 16 03:13:14.609457 systemd[1704]: Reached target timers.target - Timers. Dec 16 03:13:14.611512 systemd[1704]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 03:13:14.612950 systemd[1704]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 16 03:13:14.628329 systemd[1704]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 03:13:14.628435 systemd[1704]: Reached target sockets.target - Sockets. Dec 16 03:13:14.637405 systemd[1704]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 16 03:13:14.637582 systemd[1704]: Reached target basic.target - Basic System. Dec 16 03:13:14.637661 systemd[1704]: Reached target default.target - Main User Target. Dec 16 03:13:14.637747 systemd[1704]: Startup finished in 246ms. Dec 16 03:13:14.641952 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 03:13:14.708453 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 03:13:14.742828 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:49682.service - OpenSSH per-connection server daemon (10.0.0.1:49682). Dec 16 03:13:14.900966 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 49682 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:13:14.903703 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:13:14.913417 systemd-logind[1586]: New session 3 of user core. Dec 16 03:13:14.923140 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 03:13:14.946390 sshd[1722]: Connection closed by 10.0.0.1 port 49682 Dec 16 03:13:14.946989 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Dec 16 03:13:14.958520 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:49682.service: Deactivated successfully. Dec 16 03:13:14.961296 systemd[1]: session-3.scope: Deactivated successfully. 
Dec 16 03:13:14.962999 systemd-logind[1586]: Session 3 logged out. Waiting for processes to exit. Dec 16 03:13:14.966481 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:49694.service - OpenSSH per-connection server daemon (10.0.0.1:49694). Dec 16 03:13:14.970663 systemd-logind[1586]: Removed session 3. Dec 16 03:13:15.113329 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 49694 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:13:15.115458 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:13:15.120980 systemd-logind[1586]: New session 4 of user core. Dec 16 03:13:15.132015 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 03:13:15.137839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:13:15.140784 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 03:13:15.142962 systemd[1]: Startup finished in 4.641s (kernel) + 7.980s (initrd) + 6.097s (userspace) = 18.719s. Dec 16 03:13:15.143807 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:13:15.158079 sshd[1739]: Connection closed by 10.0.0.1 port 49694 Dec 16 03:13:15.158899 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Dec 16 03:13:15.164141 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:49694.service: Deactivated successfully. Dec 16 03:13:15.166530 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 03:13:15.167484 systemd-logind[1586]: Session 4 logged out. Waiting for processes to exit. Dec 16 03:13:15.169036 systemd-logind[1586]: Removed session 4. 
Dec 16 03:13:15.808508 kubelet[1738]: E1216 03:13:15.808422 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:13:15.812442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:13:15.812668 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:13:15.813150 systemd[1]: kubelet.service: Consumed 1.954s CPU time, 266M memory peak. Dec 16 03:13:25.186407 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:42288.service - OpenSSH per-connection server daemon (10.0.0.1:42288). Dec 16 03:13:25.262928 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 42288 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:13:25.265305 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:13:25.272647 systemd-logind[1586]: New session 5 of user core. Dec 16 03:13:25.282940 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 03:13:25.299000 sshd[1761]: Connection closed by 10.0.0.1 port 42288 Dec 16 03:13:25.299434 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Dec 16 03:13:25.309973 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:42288.service: Deactivated successfully. Dec 16 03:13:25.312206 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 03:13:25.313061 systemd-logind[1586]: Session 5 logged out. Waiting for processes to exit. Dec 16 03:13:25.316203 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:42294.service - OpenSSH per-connection server daemon (10.0.0.1:42294). Dec 16 03:13:25.316890 systemd-logind[1586]: Removed session 5. 
Dec 16 03:13:25.380697 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 42294 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:13:25.382534 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:13:25.387222 systemd-logind[1586]: New session 6 of user core. Dec 16 03:13:25.401862 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 03:13:25.411104 sshd[1771]: Connection closed by 10.0.0.1 port 42294 Dec 16 03:13:25.411396 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Dec 16 03:13:25.420491 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:42294.service: Deactivated successfully. Dec 16 03:13:25.422358 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 03:13:25.423104 systemd-logind[1586]: Session 6 logged out. Waiting for processes to exit. Dec 16 03:13:25.425622 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:42296.service - OpenSSH per-connection server daemon (10.0.0.1:42296). Dec 16 03:13:25.426451 systemd-logind[1586]: Removed session 6. Dec 16 03:13:25.492642 sshd[1777]: Accepted publickey for core from 10.0.0.1 port 42296 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:13:25.494232 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:13:25.498862 systemd-logind[1586]: New session 7 of user core. Dec 16 03:13:25.505875 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 03:13:25.521478 sshd[1781]: Connection closed by 10.0.0.1 port 42296 Dec 16 03:13:25.521877 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Dec 16 03:13:25.531491 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:42296.service: Deactivated successfully. Dec 16 03:13:25.533702 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 03:13:25.534875 systemd-logind[1586]: Session 7 logged out. Waiting for processes to exit. 
Dec 16 03:13:25.538166 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:42304.service - OpenSSH per-connection server daemon (10.0.0.1:42304). Dec 16 03:13:25.538916 systemd-logind[1586]: Removed session 7. Dec 16 03:13:25.617956 sshd[1787]: Accepted publickey for core from 10.0.0.1 port 42304 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:13:25.620224 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:13:25.628927 systemd-logind[1586]: New session 8 of user core. Dec 16 03:13:25.636004 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 03:13:25.664063 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 03:13:25.664493 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:13:25.684542 sudo[1792]: pam_unix(sudo:session): session closed for user root Dec 16 03:13:25.686422 sshd[1791]: Connection closed by 10.0.0.1 port 42304 Dec 16 03:13:25.686773 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Dec 16 03:13:25.697433 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:42304.service: Deactivated successfully. Dec 16 03:13:25.699745 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 03:13:25.700687 systemd-logind[1586]: Session 8 logged out. Waiting for processes to exit. Dec 16 03:13:25.703972 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:42306.service - OpenSSH per-connection server daemon (10.0.0.1:42306). Dec 16 03:13:25.704688 systemd-logind[1586]: Removed session 8. Dec 16 03:13:25.776633 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 42306 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:13:25.779049 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:13:25.784571 systemd-logind[1586]: New session 9 of user core. 
Dec 16 03:13:25.797982 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 03:13:25.816997 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 03:13:25.817461 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:13:25.822381 sudo[1805]: pam_unix(sudo:session): session closed for user root Dec 16 03:13:25.832641 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 03:13:25.833152 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:13:25.840407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 03:13:25.842116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:13:25.844110 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 03:13:25.897000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:13:25.903482 augenrules[1832]: No rules Dec 16 03:13:25.905260 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 03:13:25.905661 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 03:13:25.909766 kernel: kauditd_printk_skb: 167 callbacks suppressed Dec 16 03:13:25.909839 kernel: audit: type=1305 audit(1765854805.897:214): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 16 03:13:25.908276 sudo[1804]: pam_unix(sudo:session): session closed for user root Dec 16 03:13:25.897000 audit[1832]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcfc7cd4c0 a2=420 a3=0 items=0 ppid=1811 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:25.918056 kernel: audit: type=1300 audit(1765854805.897:214): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcfc7cd4c0 a2=420 a3=0 items=0 ppid=1811 pid=1832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:25.897000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:13:25.918260 sshd[1803]: Connection closed by 10.0.0.1 port 42306 Dec 16 03:13:25.921079 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Dec 16 03:13:25.921364 kernel: audit: type=1327 audit(1765854805.897:214): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 16 03:13:25.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:25.926617 kernel: audit: type=1130 audit(1765854805.905:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:13:25.926700 kernel: audit: type=1131 audit(1765854805.905:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:25.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:25.931611 kernel: audit: type=1106 audit(1765854805.906:217): pid=1804 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:13:25.906000 audit[1804]: USER_END pid=1804 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:13:25.937445 kernel: audit: type=1104 audit(1765854805.906:218): pid=1804 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:13:25.906000 audit[1804]: CRED_DISP pid=1804 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:13:25.921000 audit[1799]: USER_END pid=1799 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:13:25.950052 kernel: audit: type=1106 audit(1765854805.921:219): pid=1799 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:13:25.950116 kernel: audit: type=1104 audit(1765854805.922:220): pid=1799 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:13:25.922000 audit[1799]: CRED_DISP pid=1799 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:13:25.975050 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:42306.service: Deactivated successfully. Dec 16 03:13:25.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.26:22-10.0.0.1:42306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:25.977468 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 03:13:25.978464 systemd-logind[1586]: Session 9 logged out. Waiting for processes to exit. 
Dec 16 03:13:25.981703 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:42322.service - OpenSSH per-connection server daemon (10.0.0.1:42322). Dec 16 03:13:25.983252 systemd-logind[1586]: Removed session 9. Dec 16 03:13:25.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.26:22-10.0.0.1:42322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:25.988078 kernel: audit: type=1131 audit(1765854805.973:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.26:22-10.0.0.1:42306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:26.038000 audit[1841]: USER_ACCT pid=1841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:13:26.041618 sshd[1841]: Accepted publickey for core from 10.0.0.1 port 42322 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:13:26.041000 audit[1841]: CRED_ACQ pid=1841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:13:26.041000 audit[1841]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc13f75290 a2=3 a3=0 items=0 ppid=1 pid=1841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:26.041000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:13:26.044086 sshd-session[1841]: pam_unix(sshd:session): session opened for 
user core(uid=500) by core(uid=0) Dec 16 03:13:26.050426 systemd-logind[1586]: New session 10 of user core. Dec 16 03:13:26.069035 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 03:13:26.071000 audit[1841]: USER_START pid=1841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:13:26.073000 audit[1845]: CRED_ACQ pid=1845 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:13:26.085000 audit[1846]: USER_ACCT pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:13:26.086000 audit[1846]: CRED_REFR pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:13:26.086000 audit[1846]: USER_START pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:13:26.087515 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 03:13:26.088159 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 03:13:26.145949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 03:13:26.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:26.157128 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:13:26.255169 kubelet[1857]: E1216 03:13:26.255087 1857 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:13:26.262364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:13:26.262601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:13:26.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:13:26.263164 systemd[1]: kubelet.service: Consumed 326ms CPU time, 111.2M memory peak. Dec 16 03:13:26.728488 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 03:13:26.745364 (dockerd)[1882]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 03:13:27.320584 dockerd[1882]: time="2025-12-16T03:13:27.320512255Z" level=info msg="Starting up" Dec 16 03:13:27.321578 dockerd[1882]: time="2025-12-16T03:13:27.321555671Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 03:13:27.347053 dockerd[1882]: time="2025-12-16T03:13:27.346974821Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 03:13:27.451681 dockerd[1882]: time="2025-12-16T03:13:27.451608637Z" level=info msg="Loading containers: start." Dec 16 03:13:27.490773 kernel: Initializing XFRM netlink socket Dec 16 03:13:27.575000 audit[1935]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.575000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc3ecf6d50 a2=0 a3=0 items=0 ppid=1882 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.575000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:13:27.579000 audit[1937]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.579000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe66439b60 a2=0 a3=0 items=0 ppid=1882 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:13:27.579000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:13:27.583000 audit[1939]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.583000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe268aaaa0 a2=0 a3=0 items=0 ppid=1882 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.583000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:13:27.590000 audit[1941]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.590000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4c5130c0 a2=0 a3=0 items=0 ppid=1882 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:13:27.593000 audit[1943]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.593000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe7d0157a0 a2=0 a3=0 items=0 ppid=1882 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.593000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:13:27.596000 audit[1945]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.596000 audit[1945]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff316ec730 a2=0 a3=0 items=0 ppid=1882 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.596000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:13:27.599000 audit[1947]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.599000 audit[1947]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcd8998e30 a2=0 a3=0 items=0 ppid=1882 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.599000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:13:27.602000 audit[1949]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.602000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff0a9253c0 a2=0 a3=0 items=0 ppid=1882 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.602000 
audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:13:27.639000 audit[1952]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.639000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffac213d00 a2=0 a3=0 items=0 ppid=1882 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.639000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 16 03:13:27.642000 audit[1954]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.642000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd1865a6f0 a2=0 a3=0 items=0 ppid=1882 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.642000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:13:27.644000 audit[1956]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.644000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff574f45e0 a2=0 a3=0 items=0 ppid=1882 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.644000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:13:27.647000 audit[1958]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.647000 audit[1958]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc74811b90 a2=0 a3=0 items=0 ppid=1882 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.647000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:13:27.649000 audit[1960]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.649000 audit[1960]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffff184d2f0 a2=0 a3=0 items=0 ppid=1882 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:13:27.692000 audit[1990]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.692000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc7d854e70 a2=0 a3=0 items=0 ppid=1882 pid=1990 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.692000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 16 03:13:27.694000 audit[1992]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.694000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffd87e86a0 a2=0 a3=0 items=0 ppid=1882 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.694000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 16 03:13:27.696000 audit[1994]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.696000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca7479130 a2=0 a3=0 items=0 ppid=1882 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.696000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 16 03:13:27.699000 audit[1996]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.699000 audit[1996]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbacac480 a2=0 a3=0 items=0 ppid=1882 pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.699000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 16 03:13:27.701000 audit[1998]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.701000 audit[1998]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffb93297d0 a2=0 a3=0 items=0 ppid=1882 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.701000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 16 03:13:27.703000 audit[2000]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.703000 audit[2000]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc1be98fe0 a2=0 a3=0 items=0 ppid=1882 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.703000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:13:27.705000 audit[2002]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.705000 audit[2002]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffee26e7a50 a2=0 a3=0 items=0 ppid=1882 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.705000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:13:27.710000 audit[2004]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.710000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffed28ad620 a2=0 a3=0 items=0 ppid=1882 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 16 03:13:27.713000 audit[2006]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.713000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffcba5da7b0 a2=0 a3=0 items=0 ppid=1882 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.713000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 16 03:13:27.715000 audit[2008]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 
03:13:27.715000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcee3f12b0 a2=0 a3=0 items=0 ppid=1882 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 16 03:13:27.718000 audit[2010]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.718000 audit[2010]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffd1b3581c0 a2=0 a3=0 items=0 ppid=1882 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 16 03:13:27.720000 audit[2012]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.720000 audit[2012]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd42b1b620 a2=0 a3=0 items=0 ppid=1882 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 16 03:13:27.724000 audit[2014]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2014 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.724000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff68c3bb10 a2=0 a3=0 items=0 ppid=1882 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.724000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 16 03:13:27.734000 audit[2019]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.734000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc8198d210 a2=0 a3=0 items=0 ppid=1882 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:13:27.736000 audit[2021]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.736000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc18a51c50 a2=0 a3=0 items=0 ppid=1882 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:13:27.739000 audit[2023]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2023 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.739000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd449fde90 a2=0 a3=0 items=0 ppid=1882 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.739000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:13:27.741000 audit[2025]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.741000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffdd766270 a2=0 a3=0 items=0 ppid=1882 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.741000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 16 03:13:27.743000 audit[2027]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.743000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc3c0cb300 a2=0 a3=0 items=0 ppid=1882 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.743000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 16 03:13:27.746000 audit[2029]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2029 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:13:27.746000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe0c20d460 a2=0 a3=0 items=0 ppid=1882 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.746000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 16 03:13:27.772000 audit[2034]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.772000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd5df38130 a2=0 a3=0 items=0 ppid=1882 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.772000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 16 03:13:27.775000 audit[2036]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.775000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff59c8f610 a2=0 a3=0 items=0 ppid=1882 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.775000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 16 03:13:27.786000 audit[2044]: 
NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.786000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc1e9d38e0 a2=0 a3=0 items=0 ppid=1882 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.786000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 16 03:13:27.797000 audit[2050]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.797000 audit[2050]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff50f4bbd0 a2=0 a3=0 items=0 ppid=1882 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 16 03:13:27.801000 audit[2052]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.801000 audit[2052]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff9b9d3830 a2=0 a3=0 items=0 ppid=1882 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.801000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 16 03:13:27.804000 audit[2054]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.804000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc3f137cb0 a2=0 a3=0 items=0 ppid=1882 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.804000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 16 03:13:27.806000 audit[2056]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.806000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffe0edf4c0 a2=0 a3=0 items=0 ppid=1882 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.806000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 16 03:13:27.809000 audit[2058]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:13:27.809000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe4c8846d0 
a2=0 a3=0 items=0 ppid=1882 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:13:27.809000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 16 03:13:27.811879 systemd-networkd[1500]: docker0: Link UP Dec 16 03:13:27.818411 dockerd[1882]: time="2025-12-16T03:13:27.818331894Z" level=info msg="Loading containers: done." Dec 16 03:13:27.839480 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck498655698-merged.mount: Deactivated successfully. Dec 16 03:13:27.847150 dockerd[1882]: time="2025-12-16T03:13:27.847060541Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 03:13:27.847313 dockerd[1882]: time="2025-12-16T03:13:27.847203669Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 03:13:27.847341 dockerd[1882]: time="2025-12-16T03:13:27.847325497Z" level=info msg="Initializing buildkit" Dec 16 03:13:27.884128 dockerd[1882]: time="2025-12-16T03:13:27.884048556Z" level=info msg="Completed buildkit initialization" Dec 16 03:13:27.891488 dockerd[1882]: time="2025-12-16T03:13:27.891429328Z" level=info msg="Daemon has completed initialization" Dec 16 03:13:27.891670 dockerd[1882]: time="2025-12-16T03:13:27.891507194Z" level=info msg="API listen on /run/docker.sock" Dec 16 03:13:27.891894 systemd[1]: Started docker.service - Docker Application Container Engine. 
Dec 16 03:13:27.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:28.961101 containerd[1606]: time="2025-12-16T03:13:28.960025738Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 03:13:30.692431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2054844440.mount: Deactivated successfully. Dec 16 03:13:35.285033 containerd[1606]: time="2025-12-16T03:13:35.284931494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:35.286488 containerd[1606]: time="2025-12-16T03:13:35.286412411Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=28066163" Dec 16 03:13:35.294092 containerd[1606]: time="2025-12-16T03:13:35.293999600Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:35.298743 containerd[1606]: time="2025-12-16T03:13:35.298647396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:35.300555 containerd[1606]: time="2025-12-16T03:13:35.300492506Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 6.340356342s" Dec 16 03:13:35.300555 containerd[1606]: time="2025-12-16T03:13:35.300548321Z" 
level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 03:13:35.302562 containerd[1606]: time="2025-12-16T03:13:35.301492261Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 03:13:36.513385 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 03:13:36.528111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:13:37.111789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:13:37.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:37.115997 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 16 03:13:37.116086 kernel: audit: type=1130 audit(1765854817.111:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:37.132439 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:13:37.283379 kubelet[2169]: E1216 03:13:37.283269 2169 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:13:37.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 03:13:37.292590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:13:37.292882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:13:37.293503 systemd[1]: kubelet.service: Consumed 478ms CPU time, 109.9M memory peak. Dec 16 03:13:37.300900 kernel: audit: type=1131 audit(1765854817.291:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:13:40.017470 containerd[1606]: time="2025-12-16T03:13:40.017187653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:40.022745 containerd[1606]: time="2025-12-16T03:13:40.021485573Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24983855" Dec 16 03:13:40.022745 containerd[1606]: time="2025-12-16T03:13:40.021615387Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:40.034893 containerd[1606]: time="2025-12-16T03:13:40.034302369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:40.042445 containerd[1606]: time="2025-12-16T03:13:40.041299411Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" 
in 4.739762838s" Dec 16 03:13:40.042445 containerd[1606]: time="2025-12-16T03:13:40.041361618Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 03:13:40.043923 containerd[1606]: time="2025-12-16T03:13:40.043616236Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 03:13:43.573529 containerd[1606]: time="2025-12-16T03:13:43.573423545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:43.578420 containerd[1606]: time="2025-12-16T03:13:43.578344313Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19396111" Dec 16 03:13:43.581031 containerd[1606]: time="2025-12-16T03:13:43.580961431Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:43.591683 containerd[1606]: time="2025-12-16T03:13:43.590681720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:43.595730 containerd[1606]: time="2025-12-16T03:13:43.595052807Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 3.55137713s" Dec 16 03:13:43.595730 containerd[1606]: time="2025-12-16T03:13:43.595105957Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image 
reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 03:13:43.598701 containerd[1606]: time="2025-12-16T03:13:43.598404453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 03:13:46.355257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3609808117.mount: Deactivated successfully. Dec 16 03:13:47.549380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 03:13:47.553955 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:13:48.103436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:13:48.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:48.120895 kernel: audit: type=1130 audit(1765854828.102:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:13:48.122221 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:13:48.255182 kubelet[2200]: E1216 03:13:48.255083 2200 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:13:48.260544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:13:48.260841 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 16 03:13:48.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:13:48.261552 systemd[1]: kubelet.service: Consumed 485ms CPU time, 110.2M memory peak. Dec 16 03:13:48.273770 kernel: audit: type=1131 audit(1765854828.259:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:13:49.044701 containerd[1606]: time="2025-12-16T03:13:49.044595262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:49.045777 containerd[1606]: time="2025-12-16T03:13:49.045692464Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31157702" Dec 16 03:13:49.049173 containerd[1606]: time="2025-12-16T03:13:49.049061741Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:49.057769 containerd[1606]: time="2025-12-16T03:13:49.057610773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:13:49.060702 containerd[1606]: time="2025-12-16T03:13:49.058353617Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 5.459903809s" Dec 16 03:13:49.060702 
containerd[1606]: time="2025-12-16T03:13:49.058401519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 03:13:49.061135 containerd[1606]: time="2025-12-16T03:13:49.061079391Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 03:13:50.640276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330763784.mount: Deactivated successfully. Dec 16 03:13:58.355885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 03:13:58.358060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:13:58.454945 update_engine[1589]: I20251216 03:13:58.454785 1589 update_attempter.cc:509] Updating boot flags... Dec 16 03:14:02.182645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:02.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:02.188771 kernel: audit: type=1130 audit(1765854842.182:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:02.188788 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 03:14:02.227177 kubelet[2288]: E1216 03:14:02.227099 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 03:14:02.231354 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 03:14:02.231628 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 03:14:02.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:14:02.232192 systemd[1]: kubelet.service: Consumed 229ms CPU time, 110.6M memory peak. Dec 16 03:14:02.282785 kernel: audit: type=1131 audit(1765854842.231:279): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 16 03:14:02.809950 containerd[1606]: time="2025-12-16T03:14:02.809859238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:02.812161 containerd[1606]: time="2025-12-16T03:14:02.812116060Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18309336" Dec 16 03:14:02.814112 containerd[1606]: time="2025-12-16T03:14:02.814066983Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:02.818648 containerd[1606]: time="2025-12-16T03:14:02.818609302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:02.819831 containerd[1606]: time="2025-12-16T03:14:02.819792332Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 13.758658556s" Dec 16 03:14:02.819831 containerd[1606]: time="2025-12-16T03:14:02.819829212Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 03:14:02.820638 containerd[1606]: time="2025-12-16T03:14:02.820587026Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 03:14:04.486318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount666440107.mount: Deactivated successfully. 
Dec 16 03:14:04.498740 containerd[1606]: time="2025-12-16T03:14:04.498652228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:14:04.500425 containerd[1606]: time="2025-12-16T03:14:04.500379083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=1382" Dec 16 03:14:04.502446 containerd[1606]: time="2025-12-16T03:14:04.502400986Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:14:04.506016 containerd[1606]: time="2025-12-16T03:14:04.505943665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 03:14:04.506889 containerd[1606]: time="2025-12-16T03:14:04.506841072Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.686205805s" Dec 16 03:14:04.506889 containerd[1606]: time="2025-12-16T03:14:04.506878764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 03:14:04.507529 containerd[1606]: time="2025-12-16T03:14:04.507490370Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 03:14:06.135317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630158110.mount: Deactivated 
successfully. Dec 16 03:14:09.190952 containerd[1606]: time="2025-12-16T03:14:09.189468974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:09.193378 containerd[1606]: time="2025-12-16T03:14:09.193316152Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=45609274" Dec 16 03:14:09.199112 containerd[1606]: time="2025-12-16T03:14:09.198947747Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:09.205975 containerd[1606]: time="2025-12-16T03:14:09.205882951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:09.210573 containerd[1606]: time="2025-12-16T03:14:09.208364002Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.700781948s" Dec 16 03:14:09.210573 containerd[1606]: time="2025-12-16T03:14:09.209234244Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 03:14:12.115411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:12.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:12.115610 systemd[1]: kubelet.service: Consumed 229ms CPU time, 110.6M memory peak. Dec 16 03:14:12.117822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:14:12.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.124594 kernel: audit: type=1130 audit(1765854852.114:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.124687 kernel: audit: type=1131 audit(1765854852.114:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:12.144045 systemd[1]: Reload requested from client PID 2388 ('systemctl') (unit session-10.scope)... Dec 16 03:14:12.144059 systemd[1]: Reloading... Dec 16 03:14:12.246771 zram_generator::config[2434]: No configuration found. Dec 16 03:14:13.262526 systemd[1]: Reloading finished in 1118 ms. 
Dec 16 03:14:13.289755 kernel: audit: type=1334 audit(1765854853.285:282): prog-id=61 op=LOAD Dec 16 03:14:13.289881 kernel: audit: type=1334 audit(1765854853.285:283): prog-id=56 op=UNLOAD Dec 16 03:14:13.285000 audit: BPF prog-id=61 op=LOAD Dec 16 03:14:13.285000 audit: BPF prog-id=56 op=UNLOAD Dec 16 03:14:13.288000 audit: BPF prog-id=62 op=LOAD Dec 16 03:14:13.291763 kernel: audit: type=1334 audit(1765854853.288:284): prog-id=62 op=LOAD Dec 16 03:14:13.291828 kernel: audit: type=1334 audit(1765854853.288:285): prog-id=58 op=UNLOAD Dec 16 03:14:13.288000 audit: BPF prog-id=58 op=UNLOAD Dec 16 03:14:13.293296 kernel: audit: type=1334 audit(1765854853.288:286): prog-id=63 op=LOAD Dec 16 03:14:13.288000 audit: BPF prog-id=63 op=LOAD Dec 16 03:14:13.294737 kernel: audit: type=1334 audit(1765854853.288:287): prog-id=64 op=LOAD Dec 16 03:14:13.288000 audit: BPF prog-id=64 op=LOAD Dec 16 03:14:13.288000 audit: BPF prog-id=59 op=UNLOAD Dec 16 03:14:13.297588 kernel: audit: type=1334 audit(1765854853.288:288): prog-id=59 op=UNLOAD Dec 16 03:14:13.297650 kernel: audit: type=1334 audit(1765854853.288:289): prog-id=60 op=UNLOAD Dec 16 03:14:13.288000 audit: BPF prog-id=60 op=UNLOAD Dec 16 03:14:13.289000 audit: BPF prog-id=65 op=LOAD Dec 16 03:14:13.289000 audit: BPF prog-id=57 op=UNLOAD Dec 16 03:14:13.290000 audit: BPF prog-id=66 op=LOAD Dec 16 03:14:13.297000 audit: BPF prog-id=48 op=UNLOAD Dec 16 03:14:13.297000 audit: BPF prog-id=67 op=LOAD Dec 16 03:14:13.297000 audit: BPF prog-id=68 op=LOAD Dec 16 03:14:13.297000 audit: BPF prog-id=49 op=UNLOAD Dec 16 03:14:13.297000 audit: BPF prog-id=50 op=UNLOAD Dec 16 03:14:13.299000 audit: BPF prog-id=69 op=LOAD Dec 16 03:14:13.299000 audit: BPF prog-id=41 op=UNLOAD Dec 16 03:14:13.299000 audit: BPF prog-id=70 op=LOAD Dec 16 03:14:13.299000 audit: BPF prog-id=71 op=LOAD Dec 16 03:14:13.299000 audit: BPF prog-id=42 op=UNLOAD Dec 16 03:14:13.299000 audit: BPF prog-id=43 op=UNLOAD Dec 16 03:14:13.301000 audit: BPF prog-id=72 
op=LOAD Dec 16 03:14:13.301000 audit: BPF prog-id=44 op=UNLOAD Dec 16 03:14:13.303000 audit: BPF prog-id=73 op=LOAD Dec 16 03:14:13.303000 audit: BPF prog-id=45 op=UNLOAD Dec 16 03:14:13.303000 audit: BPF prog-id=74 op=LOAD Dec 16 03:14:13.303000 audit: BPF prog-id=75 op=LOAD Dec 16 03:14:13.303000 audit: BPF prog-id=46 op=UNLOAD Dec 16 03:14:13.303000 audit: BPF prog-id=47 op=UNLOAD Dec 16 03:14:13.304000 audit: BPF prog-id=76 op=LOAD Dec 16 03:14:13.304000 audit: BPF prog-id=51 op=UNLOAD Dec 16 03:14:13.304000 audit: BPF prog-id=77 op=LOAD Dec 16 03:14:13.304000 audit: BPF prog-id=78 op=LOAD Dec 16 03:14:13.304000 audit: BPF prog-id=52 op=UNLOAD Dec 16 03:14:13.304000 audit: BPF prog-id=53 op=UNLOAD Dec 16 03:14:13.305000 audit: BPF prog-id=79 op=LOAD Dec 16 03:14:13.305000 audit: BPF prog-id=80 op=LOAD Dec 16 03:14:13.305000 audit: BPF prog-id=54 op=UNLOAD Dec 16 03:14:13.305000 audit: BPF prog-id=55 op=UNLOAD Dec 16 03:14:13.328434 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 03:14:13.328559 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 03:14:13.328916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:13.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 16 03:14:13.328965 systemd[1]: kubelet.service: Consumed 181ms CPU time, 98.3M memory peak. Dec 16 03:14:13.330557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:14:13.540557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:13.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:13.554206 (kubelet)[2482]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:14:13.681552 kubelet[2482]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:14:13.681552 kubelet[2482]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 03:14:13.681552 kubelet[2482]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:14:13.682019 kubelet[2482]: I1216 03:14:13.681627 2482 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:14:13.865747 kubelet[2482]: I1216 03:14:13.865693 2482 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 03:14:13.865747 kubelet[2482]: I1216 03:14:13.865733 2482 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:14:13.865997 kubelet[2482]: I1216 03:14:13.865978 2482 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 03:14:13.918866 kubelet[2482]: E1216 03:14:13.918804 2482 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:13.938947 kubelet[2482]: I1216 
03:14:13.938904 2482 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:14:13.961375 kubelet[2482]: I1216 03:14:13.961334 2482 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:14:13.967272 kubelet[2482]: I1216 03:14:13.967235 2482 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 16 03:14:13.968829 kubelet[2482]: I1216 03:14:13.968788 2482 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:14:13.969016 kubelet[2482]: I1216 03:14:13.968822 2482 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope"
:"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:14:13.969152 kubelet[2482]: I1216 03:14:13.969029 2482 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 03:14:13.969152 kubelet[2482]: I1216 03:14:13.969038 2482 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 03:14:13.969217 kubelet[2482]: I1216 03:14:13.969207 2482 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:14:14.027874 kubelet[2482]: I1216 03:14:14.027806 2482 kubelet.go:446] "Attempting to sync node with API server" Dec 16 03:14:14.027874 kubelet[2482]: I1216 03:14:14.027879 2482 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:14:14.028061 kubelet[2482]: I1216 03:14:14.027923 2482 kubelet.go:352] "Adding apiserver pod source" Dec 16 03:14:14.028061 kubelet[2482]: I1216 03:14:14.027943 2482 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:14:14.035984 kubelet[2482]: I1216 03:14:14.035934 2482 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:14:14.036513 kubelet[2482]: I1216 03:14:14.036461 2482 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 03:14:14.050781 kubelet[2482]: W1216 03:14:14.050703 2482 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 16 03:14:14.052730 kubelet[2482]: W1216 03:14:14.052656 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:14.052799 kubelet[2482]: E1216 03:14:14.052738 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:14.052799 kubelet[2482]: W1216 03:14:14.052680 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:14.052861 kubelet[2482]: E1216 03:14:14.052794 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:14.063901 kubelet[2482]: I1216 03:14:14.063815 2482 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:14:14.063901 kubelet[2482]: I1216 03:14:14.063911 2482 server.go:1287] "Started kubelet" Dec 16 03:14:14.065871 kubelet[2482]: I1216 03:14:14.065828 2482 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:14:14.066932 kubelet[2482]: I1216 03:14:14.066847 2482 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:14:14.067217 kubelet[2482]: I1216 03:14:14.067196 2482 
server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:14:14.067310 kubelet[2482]: I1216 03:14:14.067272 2482 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:14:14.068298 kubelet[2482]: I1216 03:14:14.068270 2482 server.go:479] "Adding debug handlers to kubelet server" Dec 16 03:14:14.069641 kubelet[2482]: I1216 03:14:14.069570 2482 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:14:14.070281 kubelet[2482]: E1216 03:14:14.070001 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:14.070281 kubelet[2482]: I1216 03:14:14.070031 2482 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:14:14.070281 kubelet[2482]: I1216 03:14:14.070189 2482 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:14:14.070281 kubelet[2482]: I1216 03:14:14.070270 2482 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:14:14.070672 kubelet[2482]: W1216 03:14:14.070633 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:14.070746 kubelet[2482]: E1216 03:14:14.070674 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:14.071473 kubelet[2482]: I1216 03:14:14.071447 2482 factory.go:221] Registration of the systemd container factory successfully Dec 16 03:14:14.071531 
kubelet[2482]: I1216 03:14:14.071522 2482 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:14:14.100146 kubelet[2482]: E1216 03:14:14.100053 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" Dec 16 03:14:14.102357 kubelet[2482]: E1216 03:14:14.102324 2482 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:14:14.102514 kubelet[2482]: I1216 03:14:14.102496 2482 factory.go:221] Registration of the containerd container factory successfully Dec 16 03:14:14.102000 audit[2495]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:14.102000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe7673dd10 a2=0 a3=0 items=0 ppid=2482 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:14.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:14:14.104000 audit[2496]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:14.104000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff11894f60 a2=0 a3=0 items=0 ppid=2482 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:14.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:14:14.106592 kubelet[2482]: E1216 03:14:14.104979 2482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188193a245603892 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 03:14:14.063863954 +0000 UTC m=+0.500198261,LastTimestamp:2025-12-16 03:14:14.063863954 +0000 UTC m=+0.500198261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 03:14:14.107000 audit[2498]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:14.107000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc11295500 a2=0 a3=0 items=0 ppid=2482 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:14.107000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:14:14.111000 audit[2501]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:14.111000 
audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc724a8a00 a2=0 a3=0 items=0 ppid=2482 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:14.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:14:14.120135 kubelet[2482]: I1216 03:14:14.120040 2482 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:14:14.120135 kubelet[2482]: I1216 03:14:14.120062 2482 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:14:14.120135 kubelet[2482]: I1216 03:14:14.120085 2482 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:14:14.170923 kubelet[2482]: E1216 03:14:14.170855 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:14.271541 kubelet[2482]: E1216 03:14:14.271456 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:14.301468 kubelet[2482]: E1216 03:14:14.301403 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" Dec 16 03:14:14.372548 kubelet[2482]: E1216 03:14:14.372354 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:14.473222 kubelet[2482]: E1216 03:14:14.473138 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:14.574155 kubelet[2482]: E1216 03:14:14.574091 2482 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Dec 16 03:14:14.675060 kubelet[2482]: E1216 03:14:14.674877 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:14.702908 kubelet[2482]: E1216 03:14:14.702857 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms" Dec 16 03:14:14.775764 kubelet[2482]: E1216 03:14:14.775667 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:14.876501 kubelet[2482]: E1216 03:14:14.876434 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:14.907379 kubelet[2482]: W1216 03:14:14.907297 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:14.907437 kubelet[2482]: E1216 03:14:14.907391 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:14.977345 kubelet[2482]: E1216 03:14:14.977259 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:15.021250 kubelet[2482]: W1216 03:14:15.021182 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:15.021306 kubelet[2482]: E1216 03:14:15.021254 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:15.077733 kubelet[2482]: E1216 03:14:15.077695 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:15.178406 kubelet[2482]: E1216 03:14:15.178366 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:15.274405 kubelet[2482]: W1216 03:14:15.274298 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:15.274405 kubelet[2482]: E1216 03:14:15.274356 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:15.278813 kubelet[2482]: E1216 03:14:15.278777 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:15.379740 kubelet[2482]: E1216 03:14:15.379633 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:15.480679 kubelet[2482]: E1216 03:14:15.480601 2482 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:15.504427 kubelet[2482]: E1216 03:14:15.504351 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="1.6s" Dec 16 03:14:15.581259 kubelet[2482]: E1216 03:14:15.581154 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:15.682367 kubelet[2482]: E1216 03:14:15.682264 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:15.783317 kubelet[2482]: E1216 03:14:15.783214 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:15.884145 kubelet[2482]: E1216 03:14:15.884074 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:15.966182 kubelet[2482]: E1216 03:14:15.966102 2482 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:15.985002 kubelet[2482]: E1216 03:14:15.984926 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:16.085834 kubelet[2482]: E1216 03:14:16.085776 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:16.186641 kubelet[2482]: E1216 03:14:16.186455 2482 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:16.287271 kubelet[2482]: E1216 03:14:16.287207 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:16.388241 kubelet[2482]: E1216 03:14:16.388166 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:16.489227 kubelet[2482]: E1216 03:14:16.489063 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:16.590164 kubelet[2482]: E1216 03:14:16.590115 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:16.690904 kubelet[2482]: E1216 03:14:16.690819 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:16.792002 kubelet[2482]: E1216 03:14:16.791795 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:16.892944 kubelet[2482]: E1216 03:14:16.892878 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:16.993089 kubelet[2482]: E1216 03:14:16.993023 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:17.093817 kubelet[2482]: E1216 03:14:17.093614 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:17.105493 kubelet[2482]: E1216 03:14:17.105447 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="3.2s" Dec 16 03:14:17.194079 kubelet[2482]: E1216 
03:14:17.194020 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:17.294787 kubelet[2482]: E1216 03:14:17.294688 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:17.395514 kubelet[2482]: E1216 03:14:17.395469 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:17.429434 kubelet[2482]: W1216 03:14:17.429367 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:17.429616 kubelet[2482]: E1216 03:14:17.429439 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:17.496409 kubelet[2482]: E1216 03:14:17.496323 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:17.582023 kubelet[2482]: W1216 03:14:17.581955 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:17.582023 kubelet[2482]: E1216 03:14:17.582022 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:17.597493 kubelet[2482]: E1216 03:14:17.597444 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:17.690786 kubelet[2482]: W1216 03:14:17.690556 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:17.690786 kubelet[2482]: E1216 03:14:17.690600 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:17.698226 kubelet[2482]: E1216 03:14:17.698160 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:17.798837 kubelet[2482]: E1216 03:14:17.798775 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:17.899641 kubelet[2482]: E1216 03:14:17.899554 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:18.000608 kubelet[2482]: E1216 03:14:18.000430 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:18.101143 kubelet[2482]: E1216 03:14:18.101081 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:18.201594 kubelet[2482]: E1216 03:14:18.201519 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:18.302503 
kubelet[2482]: E1216 03:14:18.302307 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:18.403086 kubelet[2482]: E1216 03:14:18.403005 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:18.504228 kubelet[2482]: E1216 03:14:18.504132 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:18.604833 kubelet[2482]: E1216 03:14:18.604556 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:18.704823 kubelet[2482]: E1216 03:14:18.704699 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:18.805853 kubelet[2482]: E1216 03:14:18.805764 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:18.906572 kubelet[2482]: E1216 03:14:18.906527 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.006997 kubelet[2482]: E1216 03:14:19.006905 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.107833 kubelet[2482]: E1216 03:14:19.107766 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.170000 audit[2508]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:19.171439 kubelet[2482]: I1216 03:14:19.171366 2482 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 03:14:19.173154 kubelet[2482]: I1216 03:14:19.172757 2482 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 03:14:19.173154 kubelet[2482]: I1216 03:14:19.172799 2482 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 03:14:19.173154 kubelet[2482]: I1216 03:14:19.172840 2482 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 03:14:19.173154 kubelet[2482]: I1216 03:14:19.172854 2482 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 03:14:19.173154 kubelet[2482]: E1216 03:14:19.172933 2482 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:14:19.173693 kubelet[2482]: W1216 03:14:19.173657 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:19.174303 kubelet[2482]: E1216 03:14:19.173704 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:19.192815 kernel: kauditd_printk_skb: 46 callbacks suppressed Dec 16 03:14:19.193028 kernel: audit: type=1325 audit(1765854859.170:328): table=filter:46 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:19.170000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffed59d8ea0 a2=0 a3=0 items=0 ppid=2482 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:14:19.202032 kernel: audit: type=1300 audit(1765854859.170:328): arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffed59d8ea0 a2=0 a3=0 items=0 ppid=2482 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:19.202225 kernel: audit: type=1327 audit(1765854859.170:328): proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 03:14:19.170000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 16 03:14:19.171000 audit[2509]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:19.208490 kubelet[2482]: E1216 03:14:19.208443 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.209920 kernel: audit: type=1325 audit(1765854859.171:329): table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:19.209991 kernel: audit: type=1300 audit(1765854859.171:329): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe78456590 a2=0 a3=0 items=0 ppid=2482 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:19.171000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=136 a0=3 a1=7ffe78456590 a2=0 a3=0 items=0 ppid=2482 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:19.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:14:19.218402 kernel: audit: type=1327 audit(1765854859.171:329): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 16 03:14:19.218493 kernel: audit: type=1325 audit(1765854859.172:330): table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:19.172000 audit[2510]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:19.219516 kubelet[2482]: E1216 03:14:19.219354 2482 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188193a245603892 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-16 03:14:14.063863954 +0000 UTC m=+0.500198261,LastTimestamp:2025-12-16 03:14:14.063863954 +0000 UTC m=+0.500198261,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 16 03:14:19.221274 kernel: audit: type=1300 audit(1765854859.172:330): arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffc0b1be7b0 a2=0 a3=0 items=0 ppid=2482 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:19.172000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0b1be7b0 a2=0 a3=0 items=0 ppid=2482 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:19.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:14:19.229657 kernel: audit: type=1327 audit(1765854859.172:330): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:14:19.229724 kernel: audit: type=1325 audit(1765854859.173:331): table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:19.173000 audit[2512]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:19.173000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd822749f0 a2=0 a3=0 items=0 ppid=2482 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:19.173000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 16 03:14:19.174000 audit[2513]: NETFILTER_CFG table=nat:50 family=2 entries=1 op=nft_register_chain pid=2513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 
03:14:19.174000 audit[2513]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe58edbc90 a2=0 a3=0 items=0 ppid=2482 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:19.174000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:14:19.175000 audit[2514]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:19.175000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff965c4a10 a2=0 a3=0 items=0 ppid=2482 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:19.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 16 03:14:19.176000 audit[2516]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:19.176000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd359b3380 a2=0 a3=0 items=0 ppid=2482 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:19.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:14:19.176000 audit[2517]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" 
Dec 16 03:14:19.176000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffda3830180 a2=0 a3=0 items=0 ppid=2482 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:19.176000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 16 03:14:19.273498 kubelet[2482]: E1216 03:14:19.273395 2482 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:14:19.309419 kubelet[2482]: E1216 03:14:19.309292 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.410377 kubelet[2482]: E1216 03:14:19.410278 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.427865 kubelet[2482]: I1216 03:14:19.427658 2482 policy_none.go:49] "None policy: Start" Dec 16 03:14:19.427865 kubelet[2482]: I1216 03:14:19.427782 2482 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:14:19.427865 kubelet[2482]: I1216 03:14:19.427833 2482 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:14:19.474296 kubelet[2482]: E1216 03:14:19.474142 2482 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:14:19.510996 kubelet[2482]: E1216 03:14:19.510924 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.611752 kubelet[2482]: E1216 03:14:19.611654 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.712929 kubelet[2482]: E1216 03:14:19.712688 2482 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.813565 kubelet[2482]: E1216 03:14:19.813478 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.875046 kubelet[2482]: E1216 03:14:19.874969 2482 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:14:19.910621 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 03:14:19.914470 kubelet[2482]: E1216 03:14:19.914405 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:19.926213 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 03:14:19.930808 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 03:14:19.954287 kubelet[2482]: I1216 03:14:19.954246 2482 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 03:14:19.954803 kubelet[2482]: I1216 03:14:19.954766 2482 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:14:19.954803 kubelet[2482]: I1216 03:14:19.954791 2482 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:14:19.955265 kubelet[2482]: I1216 03:14:19.955237 2482 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:14:19.957926 kubelet[2482]: E1216 03:14:19.957890 2482 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 03:14:19.958066 kubelet[2482]: E1216 03:14:19.957978 2482 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 03:14:20.057359 kubelet[2482]: I1216 03:14:20.057214 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:14:20.057806 kubelet[2482]: E1216 03:14:20.057758 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Dec 16 03:14:20.260111 kubelet[2482]: I1216 03:14:20.260071 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:14:20.260632 kubelet[2482]: E1216 03:14:20.260582 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Dec 16 03:14:20.300086 kubelet[2482]: E1216 03:14:20.300043 2482 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:20.306843 kubelet[2482]: E1216 03:14:20.306787 2482 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="6.4s" Dec 16 03:14:20.542050 kubelet[2482]: W1216 03:14:20.541982 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:20.542050 kubelet[2482]: E1216 03:14:20.542053 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:20.662502 kubelet[2482]: I1216 03:14:20.662468 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:14:20.662855 kubelet[2482]: E1216 03:14:20.662817 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Dec 16 03:14:20.683373 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. Dec 16 03:14:20.713844 kubelet[2482]: E1216 03:14:20.713814 2482 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:14:20.717028 systemd[1]: Created slice kubepods-burstable-pod655b93addaea0f01ce24b01887655ff4.slice - libcontainer container kubepods-burstable-pod655b93addaea0f01ce24b01887655ff4.slice. Dec 16 03:14:20.719864 kubelet[2482]: E1216 03:14:20.719841 2482 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:14:20.721847 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. 
Dec 16 03:14:20.723693 kubelet[2482]: E1216 03:14:20.723672 2482 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:14:20.824242 kubelet[2482]: I1216 03:14:20.824061 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:20.824242 kubelet[2482]: I1216 03:14:20.824115 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:20.824242 kubelet[2482]: I1216 03:14:20.824143 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/655b93addaea0f01ce24b01887655ff4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"655b93addaea0f01ce24b01887655ff4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:20.824242 kubelet[2482]: I1216 03:14:20.824162 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:20.824242 kubelet[2482]: I1216 03:14:20.824177 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:20.824905 kubelet[2482]: I1216 03:14:20.824192 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:20.824905 kubelet[2482]: I1216 03:14:20.824208 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 03:14:20.824905 kubelet[2482]: I1216 03:14:20.824252 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/655b93addaea0f01ce24b01887655ff4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"655b93addaea0f01ce24b01887655ff4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:20.824905 kubelet[2482]: I1216 03:14:20.824297 2482 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/655b93addaea0f01ce24b01887655ff4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"655b93addaea0f01ce24b01887655ff4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:21.015238 kubelet[2482]: E1216 03:14:21.015172 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:21.016252 containerd[1606]: time="2025-12-16T03:14:21.016189639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 16 03:14:21.021384 kubelet[2482]: E1216 03:14:21.021329 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:21.021689 containerd[1606]: time="2025-12-16T03:14:21.021636413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:655b93addaea0f01ce24b01887655ff4,Namespace:kube-system,Attempt:0,}" Dec 16 03:14:21.025048 kubelet[2482]: E1216 03:14:21.025006 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:21.025314 containerd[1606]: time="2025-12-16T03:14:21.025275086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 16 03:14:21.464839 kubelet[2482]: I1216 03:14:21.464787 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:14:21.465209 kubelet[2482]: E1216 03:14:21.465177 2482 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Dec 16 03:14:21.493072 kubelet[2482]: W1216 03:14:21.493024 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:21.493072 
kubelet[2482]: E1216 03:14:21.493078 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:21.713115 kubelet[2482]: W1216 03:14:21.713055 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:21.713115 kubelet[2482]: E1216 03:14:21.713106 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:22.014439 containerd[1606]: time="2025-12-16T03:14:22.014373351Z" level=info msg="connecting to shim 635c729778752cc2ad841a4e20476fbc48d8cc164268715d95ad6322c1cd4507" address="unix:///run/containerd/s/7d9333d531a639af2090c146dea807b60d45ed6d7723caa2fd389df223294cfe" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:14:22.047922 systemd[1]: Started cri-containerd-635c729778752cc2ad841a4e20476fbc48d8cc164268715d95ad6322c1cd4507.scope - libcontainer container 635c729778752cc2ad841a4e20476fbc48d8cc164268715d95ad6322c1cd4507. 
Dec 16 03:14:22.216223 containerd[1606]: time="2025-12-16T03:14:22.216149636Z" level=info msg="connecting to shim 661e007a87ac3d13cabe79ce7a4aed8c43ecad0a9ac9fc44e0a27c444cfd79f6" address="unix:///run/containerd/s/6a93f5ee91a0457f2a0572b14d66d614bd95a031592b5e9cfddc95f65d4e4fcd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:14:22.222377 containerd[1606]: time="2025-12-16T03:14:22.222239867Z" level=info msg="connecting to shim 5ae60b8d0a2b37a6994d70235b922739ae3cd1cdcc6962ff9de69df0340ece4c" address="unix:///run/containerd/s/5a190bbd816c3cee01adf954389e7145fe6460ea51de745c0adcb702413a5175" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:14:22.223000 audit: BPF prog-id=81 op=LOAD Dec 16 03:14:22.223000 audit: BPF prog-id=82 op=LOAD Dec 16 03:14:22.223000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2526 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.223000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633356337323937373837353263633261643834316134653230343736 Dec 16 03:14:22.224000 audit: BPF prog-id=82 op=UNLOAD Dec 16 03:14:22.224000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.224000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633356337323937373837353263633261643834316134653230343736 Dec 16 03:14:22.224000 audit: BPF prog-id=83 op=LOAD Dec 16 03:14:22.224000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2526 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633356337323937373837353263633261643834316134653230343736 Dec 16 03:14:22.224000 audit: BPF prog-id=84 op=LOAD Dec 16 03:14:22.224000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2526 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633356337323937373837353263633261643834316134653230343736 Dec 16 03:14:22.224000 audit: BPF prog-id=84 op=UNLOAD Dec 16 03:14:22.224000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 16 03:14:22.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633356337323937373837353263633261643834316134653230343736 Dec 16 03:14:22.224000 audit: BPF prog-id=83 op=UNLOAD Dec 16 03:14:22.224000 audit[2538]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633356337323937373837353263633261643834316134653230343736 Dec 16 03:14:22.224000 audit: BPF prog-id=85 op=LOAD Dec 16 03:14:22.224000 audit[2538]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2526 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.224000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633356337323937373837353263633261643834316134653230343736 Dec 16 03:14:22.366972 systemd[1]: Started cri-containerd-661e007a87ac3d13cabe79ce7a4aed8c43ecad0a9ac9fc44e0a27c444cfd79f6.scope - libcontainer container 661e007a87ac3d13cabe79ce7a4aed8c43ecad0a9ac9fc44e0a27c444cfd79f6. 
Dec 16 03:14:22.394073 systemd[1]: Started cri-containerd-5ae60b8d0a2b37a6994d70235b922739ae3cd1cdcc6962ff9de69df0340ece4c.scope - libcontainer container 5ae60b8d0a2b37a6994d70235b922739ae3cd1cdcc6962ff9de69df0340ece4c. Dec 16 03:14:22.397000 audit: BPF prog-id=86 op=LOAD Dec 16 03:14:22.397000 audit: BPF prog-id=87 op=LOAD Dec 16 03:14:22.397000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2571 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636316530303761383761633364313363616265373963653761346165 Dec 16 03:14:22.397000 audit: BPF prog-id=87 op=UNLOAD Dec 16 03:14:22.397000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636316530303761383761633364313363616265373963653761346165 Dec 16 03:14:22.398000 audit: BPF prog-id=88 op=LOAD Dec 16 03:14:22.398000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2571 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:14:22.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636316530303761383761633364313363616265373963653761346165 Dec 16 03:14:22.398000 audit: BPF prog-id=89 op=LOAD Dec 16 03:14:22.398000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2571 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636316530303761383761633364313363616265373963653761346165 Dec 16 03:14:22.398000 audit: BPF prog-id=89 op=UNLOAD Dec 16 03:14:22.398000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636316530303761383761633364313363616265373963653761346165 Dec 16 03:14:22.398000 audit: BPF prog-id=88 op=UNLOAD Dec 16 03:14:22.398000 audit[2590]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636316530303761383761633364313363616265373963653761346165 Dec 16 03:14:22.398000 audit: BPF prog-id=90 op=LOAD Dec 16 03:14:22.398000 audit[2590]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2571 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636316530303761383761633364313363616265373963653761346165 Dec 16 03:14:22.411000 audit: BPF prog-id=91 op=LOAD Dec 16 03:14:22.411000 audit: BPF prog-id=92 op=LOAD Dec 16 03:14:22.411000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=2580 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653630623864306132623337613639393464373032333562393232 Dec 16 03:14:22.411000 audit: BPF prog-id=92 op=UNLOAD Dec 16 03:14:22.411000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2580 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653630623864306132623337613639393464373032333562393232 Dec 16 03:14:22.411000 audit: BPF prog-id=93 op=LOAD Dec 16 03:14:22.411000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=2580 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653630623864306132623337613639393464373032333562393232 Dec 16 03:14:22.411000 audit: BPF prog-id=94 op=LOAD Dec 16 03:14:22.411000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=2580 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653630623864306132623337613639393464373032333562393232 Dec 16 03:14:22.411000 audit: BPF prog-id=94 op=UNLOAD Dec 16 03:14:22.411000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2580 
pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653630623864306132623337613639393464373032333562393232 Dec 16 03:14:22.411000 audit: BPF prog-id=93 op=UNLOAD Dec 16 03:14:22.411000 audit[2606]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2580 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653630623864306132623337613639393464373032333562393232 Dec 16 03:14:22.411000 audit: BPF prog-id=95 op=LOAD Dec 16 03:14:22.411000 audit[2606]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=2580 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561653630623864306132623337613639393464373032333562393232 Dec 16 03:14:22.427216 containerd[1606]: time="2025-12-16T03:14:22.426907259Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"635c729778752cc2ad841a4e20476fbc48d8cc164268715d95ad6322c1cd4507\"" Dec 16 03:14:22.429630 kubelet[2482]: E1216 03:14:22.429249 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:22.433778 containerd[1606]: time="2025-12-16T03:14:22.433010944Z" level=info msg="CreateContainer within sandbox \"635c729778752cc2ad841a4e20476fbc48d8cc164268715d95ad6322c1cd4507\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 03:14:22.499630 containerd[1606]: time="2025-12-16T03:14:22.499563639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"661e007a87ac3d13cabe79ce7a4aed8c43ecad0a9ac9fc44e0a27c444cfd79f6\"" Dec 16 03:14:22.500687 kubelet[2482]: E1216 03:14:22.500652 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:22.503434 containerd[1606]: time="2025-12-16T03:14:22.502963792Z" level=info msg="CreateContainer within sandbox \"661e007a87ac3d13cabe79ce7a4aed8c43ecad0a9ac9fc44e0a27c444cfd79f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 03:14:22.503853 containerd[1606]: time="2025-12-16T03:14:22.503820984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:655b93addaea0f01ce24b01887655ff4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae60b8d0a2b37a6994d70235b922739ae3cd1cdcc6962ff9de69df0340ece4c\"" Dec 16 03:14:22.505553 kubelet[2482]: E1216 03:14:22.504479 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:22.506526 containerd[1606]: time="2025-12-16T03:14:22.506484812Z" level=info msg="CreateContainer within sandbox \"5ae60b8d0a2b37a6994d70235b922739ae3cd1cdcc6962ff9de69df0340ece4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 03:14:22.651424 kubelet[2482]: W1216 03:14:22.651242 2482 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Dec 16 03:14:22.651424 kubelet[2482]: E1216 03:14:22.651319 2482 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Dec 16 03:14:22.691640 containerd[1606]: time="2025-12-16T03:14:22.691568031Z" level=info msg="Container 46cf2520b1efeec3896137c3084e63620b889976accba10d207424551155aa41: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:14:22.712070 containerd[1606]: time="2025-12-16T03:14:22.711204682Z" level=info msg="Container 82a14d85c50e8dd3312272c5bcbcb35b087239889df4b870bb77306580abcfaa: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:14:22.721747 containerd[1606]: time="2025-12-16T03:14:22.721660402Z" level=info msg="Container 81b2609ef73bee00c390f116112864f397aa593f3cd93a0cab287d5c77d85ab1: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:14:22.731078 containerd[1606]: time="2025-12-16T03:14:22.730992608Z" level=info msg="CreateContainer within sandbox \"635c729778752cc2ad841a4e20476fbc48d8cc164268715d95ad6322c1cd4507\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"46cf2520b1efeec3896137c3084e63620b889976accba10d207424551155aa41\"" Dec 16 03:14:22.733481 containerd[1606]: time="2025-12-16T03:14:22.731895546Z" level=info msg="StartContainer for \"46cf2520b1efeec3896137c3084e63620b889976accba10d207424551155aa41\"" Dec 16 03:14:22.733481 containerd[1606]: time="2025-12-16T03:14:22.733378244Z" level=info msg="connecting to shim 46cf2520b1efeec3896137c3084e63620b889976accba10d207424551155aa41" address="unix:///run/containerd/s/7d9333d531a639af2090c146dea807b60d45ed6d7723caa2fd389df223294cfe" protocol=ttrpc version=3 Dec 16 03:14:22.744700 containerd[1606]: time="2025-12-16T03:14:22.744618308Z" level=info msg="CreateContainer within sandbox \"5ae60b8d0a2b37a6994d70235b922739ae3cd1cdcc6962ff9de69df0340ece4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"81b2609ef73bee00c390f116112864f397aa593f3cd93a0cab287d5c77d85ab1\"" Dec 16 03:14:22.745741 containerd[1606]: time="2025-12-16T03:14:22.745653314Z" level=info msg="StartContainer for \"81b2609ef73bee00c390f116112864f397aa593f3cd93a0cab287d5c77d85ab1\"" Dec 16 03:14:22.749555 containerd[1606]: time="2025-12-16T03:14:22.749485599Z" level=info msg="CreateContainer within sandbox \"661e007a87ac3d13cabe79ce7a4aed8c43ecad0a9ac9fc44e0a27c444cfd79f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"82a14d85c50e8dd3312272c5bcbcb35b087239889df4b870bb77306580abcfaa\"" Dec 16 03:14:22.752813 containerd[1606]: time="2025-12-16T03:14:22.751472204Z" level=info msg="connecting to shim 81b2609ef73bee00c390f116112864f397aa593f3cd93a0cab287d5c77d85ab1" address="unix:///run/containerd/s/5a190bbd816c3cee01adf954389e7145fe6460ea51de745c0adcb702413a5175" protocol=ttrpc version=3 Dec 16 03:14:22.753545 containerd[1606]: time="2025-12-16T03:14:22.753497512Z" level=info msg="StartContainer for \"82a14d85c50e8dd3312272c5bcbcb35b087239889df4b870bb77306580abcfaa\"" Dec 16 03:14:22.755136 containerd[1606]: 
time="2025-12-16T03:14:22.755096048Z" level=info msg="connecting to shim 82a14d85c50e8dd3312272c5bcbcb35b087239889df4b870bb77306580abcfaa" address="unix:///run/containerd/s/6a93f5ee91a0457f2a0572b14d66d614bd95a031592b5e9cfddc95f65d4e4fcd" protocol=ttrpc version=3 Dec 16 03:14:22.781129 systemd[1]: Started cri-containerd-46cf2520b1efeec3896137c3084e63620b889976accba10d207424551155aa41.scope - libcontainer container 46cf2520b1efeec3896137c3084e63620b889976accba10d207424551155aa41. Dec 16 03:14:22.795033 systemd[1]: Started cri-containerd-81b2609ef73bee00c390f116112864f397aa593f3cd93a0cab287d5c77d85ab1.scope - libcontainer container 81b2609ef73bee00c390f116112864f397aa593f3cd93a0cab287d5c77d85ab1. Dec 16 03:14:22.809111 systemd[1]: Started cri-containerd-82a14d85c50e8dd3312272c5bcbcb35b087239889df4b870bb77306580abcfaa.scope - libcontainer container 82a14d85c50e8dd3312272c5bcbcb35b087239889df4b870bb77306580abcfaa. Dec 16 03:14:22.814000 audit: BPF prog-id=96 op=LOAD Dec 16 03:14:22.815000 audit: BPF prog-id=97 op=LOAD Dec 16 03:14:22.815000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2526 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436636632353230623165666565633338393631333763333038346536 Dec 16 03:14:22.817000 audit: BPF prog-id=97 op=UNLOAD Dec 16 03:14:22.817000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436636632353230623165666565633338393631333763333038346536 Dec 16 03:14:22.817000 audit: BPF prog-id=98 op=LOAD Dec 16 03:14:22.817000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2526 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436636632353230623165666565633338393631333763333038346536 Dec 16 03:14:22.818000 audit: BPF prog-id=99 op=LOAD Dec 16 03:14:22.818000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2526 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436636632353230623165666565633338393631333763333038346536 Dec 16 03:14:22.818000 audit: BPF prog-id=99 op=UNLOAD Dec 16 03:14:22.818000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436636632353230623165666565633338393631333763333038346536 Dec 16 03:14:22.818000 audit: BPF prog-id=98 op=UNLOAD Dec 16 03:14:22.818000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2526 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436636632353230623165666565633338393631333763333038346536 Dec 16 03:14:22.819000 audit: BPF prog-id=100 op=LOAD Dec 16 03:14:22.819000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2526 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436636632353230623165666565633338393631333763333038346536 Dec 16 03:14:22.823000 audit: BPF prog-id=101 op=LOAD Dec 16 03:14:22.823000 audit: BPF prog-id=102 op=LOAD Dec 16 03:14:22.823000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 
ppid=2580 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623236303965663733626565303063333930663131363131323836 Dec 16 03:14:22.824000 audit: BPF prog-id=102 op=UNLOAD Dec 16 03:14:22.824000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2580 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623236303965663733626565303063333930663131363131323836 Dec 16 03:14:22.824000 audit: BPF prog-id=103 op=LOAD Dec 16 03:14:22.824000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2580 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623236303965663733626565303063333930663131363131323836 Dec 16 03:14:22.824000 audit: BPF prog-id=104 op=LOAD Dec 16 03:14:22.824000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2580 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623236303965663733626565303063333930663131363131323836 Dec 16 03:14:22.824000 audit: BPF prog-id=104 op=UNLOAD Dec 16 03:14:22.824000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2580 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623236303965663733626565303063333930663131363131323836 Dec 16 03:14:22.824000 audit: BPF prog-id=103 op=UNLOAD Dec 16 03:14:22.824000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2580 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623236303965663733626565303063333930663131363131323836 Dec 16 03:14:22.824000 audit: BPF prog-id=105 op=LOAD Dec 16 03:14:22.824000 audit[2664]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2580 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831623236303965663733626565303063333930663131363131323836 Dec 16 03:14:22.868000 audit: BPF prog-id=106 op=LOAD Dec 16 03:14:22.870000 audit: BPF prog-id=107 op=LOAD Dec 16 03:14:22.870000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2571 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613134643835633530653864643333313232373263356263626362 Dec 16 03:14:22.870000 audit: BPF prog-id=107 op=UNLOAD Dec 16 03:14:22.870000 audit[2665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613134643835633530653864643333313232373263356263626362 
Dec 16 03:14:22.870000 audit: BPF prog-id=108 op=LOAD Dec 16 03:14:22.870000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2571 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613134643835633530653864643333313232373263356263626362 Dec 16 03:14:22.870000 audit: BPF prog-id=109 op=LOAD Dec 16 03:14:22.870000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2571 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613134643835633530653864643333313232373263356263626362 Dec 16 03:14:22.870000 audit: BPF prog-id=109 op=UNLOAD Dec 16 03:14:22.870000 audit[2665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.870000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613134643835633530653864643333313232373263356263626362 Dec 16 03:14:22.870000 audit: BPF prog-id=108 op=UNLOAD Dec 16 03:14:22.870000 audit[2665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2571 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613134643835633530653864643333313232373263356263626362 Dec 16 03:14:22.870000 audit: BPF prog-id=110 op=LOAD Dec 16 03:14:22.870000 audit[2665]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2571 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:22.870000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832613134643835633530653864643333313232373263356263626362 Dec 16 03:14:22.894246 containerd[1606]: time="2025-12-16T03:14:22.894174297Z" level=info msg="StartContainer for \"81b2609ef73bee00c390f116112864f397aa593f3cd93a0cab287d5c77d85ab1\" returns successfully" Dec 16 03:14:22.895905 containerd[1606]: time="2025-12-16T03:14:22.895866849Z" level=info msg="StartContainer for 
\"46cf2520b1efeec3896137c3084e63620b889976accba10d207424551155aa41\" returns successfully" Dec 16 03:14:22.926405 containerd[1606]: time="2025-12-16T03:14:22.926203579Z" level=info msg="StartContainer for \"82a14d85c50e8dd3312272c5bcbcb35b087239889df4b870bb77306580abcfaa\" returns successfully" Dec 16 03:14:23.067749 kubelet[2482]: I1216 03:14:23.067507 2482 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:14:23.220196 kubelet[2482]: E1216 03:14:23.220084 2482 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:14:23.220833 kubelet[2482]: E1216 03:14:23.220725 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:23.224477 kubelet[2482]: E1216 03:14:23.224457 2482 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:14:23.224659 kubelet[2482]: E1216 03:14:23.224646 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:23.228598 kubelet[2482]: E1216 03:14:23.228576 2482 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:14:23.228973 kubelet[2482]: E1216 03:14:23.228919 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:24.251785 kubelet[2482]: E1216 03:14:24.232654 2482 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Dec 16 03:14:24.251785 kubelet[2482]: E1216 03:14:24.232862 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:24.251785 kubelet[2482]: E1216 03:14:24.233289 2482 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:14:24.251785 kubelet[2482]: E1216 03:14:24.233460 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:25.236273 kubelet[2482]: E1216 03:14:25.236234 2482 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:14:25.236448 kubelet[2482]: E1216 03:14:25.236389 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:27.264233 kubelet[2482]: E1216 03:14:27.262251 2482 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 16 03:14:27.264233 kubelet[2482]: E1216 03:14:27.262432 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:27.592308 kubelet[2482]: I1216 03:14:27.590173 2482 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 03:14:27.592308 kubelet[2482]: E1216 03:14:27.590255 2482 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 16 03:14:27.699973 kubelet[2482]: E1216 
03:14:27.699831 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:27.800188 kubelet[2482]: E1216 03:14:27.800072 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:27.903311 kubelet[2482]: E1216 03:14:27.900537 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:28.000998 kubelet[2482]: E1216 03:14:28.000832 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:28.103088 kubelet[2482]: E1216 03:14:28.103006 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:28.203664 kubelet[2482]: E1216 03:14:28.203462 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:28.309202 kubelet[2482]: E1216 03:14:28.303810 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:28.410782 kubelet[2482]: E1216 03:14:28.410645 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:28.513048 kubelet[2482]: E1216 03:14:28.511666 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:28.612765 kubelet[2482]: E1216 03:14:28.612650 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:28.713315 kubelet[2482]: E1216 03:14:28.712821 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:28.816446 kubelet[2482]: E1216 03:14:28.813503 2482 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"localhost\" not found" Dec 16 03:14:28.917259 kubelet[2482]: E1216 03:14:28.917177 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.022774 kubelet[2482]: E1216 03:14:29.022483 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.123775 kubelet[2482]: E1216 03:14:29.123674 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.224814 kubelet[2482]: E1216 03:14:29.224554 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.328257 kubelet[2482]: E1216 03:14:29.328147 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.428861 kubelet[2482]: E1216 03:14:29.428617 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.531155 kubelet[2482]: E1216 03:14:29.528750 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.629803 kubelet[2482]: E1216 03:14:29.629409 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.730657 kubelet[2482]: E1216 03:14:29.730433 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.831161 kubelet[2482]: E1216 03:14:29.830980 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.931487 kubelet[2482]: E1216 03:14:29.931385 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:29.961288 kubelet[2482]: E1216 03:14:29.958695 
2482 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 16 03:14:30.032382 kubelet[2482]: E1216 03:14:30.031820 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:30.132411 kubelet[2482]: E1216 03:14:30.132189 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:30.234791 kubelet[2482]: E1216 03:14:30.234472 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:30.337339 kubelet[2482]: E1216 03:14:30.335203 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:30.436304 kubelet[2482]: E1216 03:14:30.436212 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:30.541765 kubelet[2482]: E1216 03:14:30.536800 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:30.640796 kubelet[2482]: E1216 03:14:30.640742 2482 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 16 03:14:30.800186 kubelet[2482]: I1216 03:14:30.799276 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:30.841959 kubelet[2482]: I1216 03:14:30.840140 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:30.876748 kubelet[2482]: I1216 03:14:30.876196 2482 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:14:31.143116 kubelet[2482]: I1216 03:14:31.142232 2482 apiserver.go:52] "Watching apiserver" Dec 16 03:14:31.149048 kubelet[2482]: E1216 
03:14:31.148477 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:31.149048 kubelet[2482]: E1216 03:14:31.148563 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:31.149048 kubelet[2482]: E1216 03:14:31.148574 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:31.174533 kubelet[2482]: I1216 03:14:31.172843 2482 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:14:31.293648 kubelet[2482]: E1216 03:14:31.293322 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:33.555979 kubelet[2482]: E1216 03:14:33.554999 2482 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:34.365779 kubelet[2482]: I1216 03:14:34.365636 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.365592745 podStartE2EDuration="4.365592745s" podCreationTimestamp="2025-12-16 03:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:14:33.954888469 +0000 UTC m=+20.391222776" watchObservedRunningTime="2025-12-16 03:14:34.365592745 +0000 UTC m=+20.801927062" Dec 16 03:14:34.695869 kubelet[2482]: I1216 03:14:34.687641 2482 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.687616958 podStartE2EDuration="4.687616958s" podCreationTimestamp="2025-12-16 03:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:14:34.372830705 +0000 UTC m=+20.809165012" watchObservedRunningTime="2025-12-16 03:14:34.687616958 +0000 UTC m=+21.123951265" Dec 16 03:14:35.698102 systemd[1]: Reload requested from client PID 2757 ('systemctl') (unit session-10.scope)... Dec 16 03:14:35.699638 systemd[1]: Reloading... Dec 16 03:14:36.041013 zram_generator::config[2809]: No configuration found. Dec 16 03:14:36.595225 systemd[1]: Reloading finished in 891 ms. Dec 16 03:14:36.636385 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 03:14:36.668815 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 03:14:36.670006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:36.670214 systemd[1]: kubelet.service: Consumed 1.647s CPU time, 134.1M memory peak. Dec 16 03:14:36.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:36.675797 kernel: kauditd_printk_skb: 146 callbacks suppressed Dec 16 03:14:36.675867 kernel: audit: type=1131 audit(1765854876.669:384): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:36.674976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 03:14:36.684845 kernel: audit: type=1334 audit(1765854876.675:385): prog-id=111 op=LOAD Dec 16 03:14:36.675000 audit: BPF prog-id=111 op=LOAD Dec 16 03:14:36.687805 kernel: audit: type=1334 audit(1765854876.675:386): prog-id=69 op=UNLOAD Dec 16 03:14:36.675000 audit: BPF prog-id=69 op=UNLOAD Dec 16 03:14:36.675000 audit: BPF prog-id=112 op=LOAD Dec 16 03:14:36.675000 audit: BPF prog-id=113 op=LOAD Dec 16 03:14:36.696482 kernel: audit: type=1334 audit(1765854876.675:387): prog-id=112 op=LOAD Dec 16 03:14:36.696552 kernel: audit: type=1334 audit(1765854876.675:388): prog-id=113 op=LOAD Dec 16 03:14:36.696576 kernel: audit: type=1334 audit(1765854876.675:389): prog-id=70 op=UNLOAD Dec 16 03:14:36.675000 audit: BPF prog-id=70 op=UNLOAD Dec 16 03:14:36.675000 audit: BPF prog-id=71 op=UNLOAD Dec 16 03:14:36.701385 kernel: audit: type=1334 audit(1765854876.675:390): prog-id=71 op=UNLOAD Dec 16 03:14:36.701472 kernel: audit: type=1334 audit(1765854876.678:391): prog-id=114 op=LOAD Dec 16 03:14:36.678000 audit: BPF prog-id=114 op=LOAD Dec 16 03:14:36.678000 audit: BPF prog-id=115 op=LOAD Dec 16 03:14:36.705743 kernel: audit: type=1334 audit(1765854876.678:392): prog-id=115 op=LOAD Dec 16 03:14:36.705850 kernel: audit: type=1334 audit(1765854876.678:393): prog-id=79 op=UNLOAD Dec 16 03:14:36.678000 audit: BPF prog-id=79 op=UNLOAD Dec 16 03:14:36.678000 audit: BPF prog-id=80 op=UNLOAD Dec 16 03:14:36.679000 audit: BPF prog-id=116 op=LOAD Dec 16 03:14:36.680000 audit: BPF prog-id=65 op=UNLOAD Dec 16 03:14:36.681000 audit: BPF prog-id=117 op=LOAD Dec 16 03:14:36.681000 audit: BPF prog-id=76 op=UNLOAD Dec 16 03:14:36.681000 audit: BPF prog-id=118 op=LOAD Dec 16 03:14:36.681000 audit: BPF prog-id=119 op=LOAD Dec 16 03:14:36.681000 audit: BPF prog-id=77 op=UNLOAD Dec 16 03:14:36.681000 audit: BPF prog-id=78 op=UNLOAD Dec 16 03:14:36.683000 audit: BPF prog-id=120 op=LOAD Dec 16 03:14:36.683000 audit: BPF prog-id=66 op=UNLOAD Dec 16 03:14:36.684000 audit: BPF prog-id=121 
op=LOAD Dec 16 03:14:36.684000 audit: BPF prog-id=122 op=LOAD Dec 16 03:14:36.684000 audit: BPF prog-id=67 op=UNLOAD Dec 16 03:14:36.684000 audit: BPF prog-id=68 op=UNLOAD Dec 16 03:14:36.685000 audit: BPF prog-id=123 op=LOAD Dec 16 03:14:36.685000 audit: BPF prog-id=61 op=UNLOAD Dec 16 03:14:36.689000 audit: BPF prog-id=124 op=LOAD Dec 16 03:14:36.697000 audit: BPF prog-id=62 op=UNLOAD Dec 16 03:14:36.698000 audit: BPF prog-id=125 op=LOAD Dec 16 03:14:36.698000 audit: BPF prog-id=126 op=LOAD Dec 16 03:14:36.698000 audit: BPF prog-id=63 op=UNLOAD Dec 16 03:14:36.698000 audit: BPF prog-id=64 op=UNLOAD Dec 16 03:14:36.705000 audit: BPF prog-id=127 op=LOAD Dec 16 03:14:36.705000 audit: BPF prog-id=73 op=UNLOAD Dec 16 03:14:36.706000 audit: BPF prog-id=128 op=LOAD Dec 16 03:14:36.706000 audit: BPF prog-id=129 op=LOAD Dec 16 03:14:36.706000 audit: BPF prog-id=74 op=UNLOAD Dec 16 03:14:36.706000 audit: BPF prog-id=75 op=UNLOAD Dec 16 03:14:36.707000 audit: BPF prog-id=130 op=LOAD Dec 16 03:14:36.707000 audit: BPF prog-id=72 op=UNLOAD Dec 16 03:14:37.251860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 03:14:37.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:37.256234 (kubelet)[2848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 03:14:37.356842 kubelet[2848]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:14:37.356842 kubelet[2848]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 16 03:14:37.356842 kubelet[2848]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 03:14:37.357448 kubelet[2848]: I1216 03:14:37.357341 2848 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 03:14:37.389100 kubelet[2848]: I1216 03:14:37.388501 2848 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 03:14:37.389100 kubelet[2848]: I1216 03:14:37.388541 2848 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 03:14:37.398408 kubelet[2848]: I1216 03:14:37.390886 2848 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 03:14:37.398408 kubelet[2848]: I1216 03:14:37.394213 2848 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 03:14:37.398408 kubelet[2848]: I1216 03:14:37.397101 2848 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 03:14:37.420450 kubelet[2848]: I1216 03:14:37.420386 2848 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 03:14:37.432251 kubelet[2848]: I1216 03:14:37.432179 2848 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 03:14:37.432505 kubelet[2848]: I1216 03:14:37.432459 2848 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 03:14:37.432963 kubelet[2848]: I1216 03:14:37.432493 2848 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 03:14:37.432963 kubelet[2848]: I1216 03:14:37.432858 2848 topology_manager.go:138] "Creating topology manager with none policy" 
Dec 16 03:14:37.432963 kubelet[2848]: I1216 03:14:37.432871 2848 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 03:14:37.432963 kubelet[2848]: I1216 03:14:37.432955 2848 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:14:37.433297 kubelet[2848]: I1216 03:14:37.433161 2848 kubelet.go:446] "Attempting to sync node with API server" Dec 16 03:14:37.433297 kubelet[2848]: I1216 03:14:37.433187 2848 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 03:14:37.433297 kubelet[2848]: I1216 03:14:37.433213 2848 kubelet.go:352] "Adding apiserver pod source" Dec 16 03:14:37.433297 kubelet[2848]: I1216 03:14:37.433228 2848 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 03:14:37.437751 kubelet[2848]: I1216 03:14:37.434905 2848 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 16 03:14:37.439860 kubelet[2848]: I1216 03:14:37.439823 2848 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 03:14:37.447741 kubelet[2848]: I1216 03:14:37.446624 2848 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 03:14:37.447741 kubelet[2848]: I1216 03:14:37.446695 2848 server.go:1287] "Started kubelet" Dec 16 03:14:37.447741 kubelet[2848]: I1216 03:14:37.447584 2848 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 03:14:37.449110 kubelet[2848]: I1216 03:14:37.449091 2848 server.go:479] "Adding debug handlers to kubelet server" Dec 16 03:14:37.450576 kubelet[2848]: I1216 03:14:37.450358 2848 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 03:14:37.451169 kubelet[2848]: I1216 03:14:37.451141 2848 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 03:14:37.461255 kubelet[2848]: I1216 03:14:37.461201 2848 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 03:14:37.461846 kubelet[2848]: I1216 03:14:37.461819 2848 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 03:14:37.464068 kubelet[2848]: I1216 03:14:37.463624 2848 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 03:14:37.464734 kubelet[2848]: I1216 03:14:37.463640 2848 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 03:14:37.466305 kubelet[2848]: I1216 03:14:37.466287 2848 reconciler.go:26] "Reconciler: start to sync state" Dec 16 03:14:37.469472 kubelet[2848]: E1216 03:14:37.469426 2848 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 03:14:37.470316 kubelet[2848]: I1216 03:14:37.470293 2848 factory.go:221] Registration of the containerd container factory successfully Dec 16 03:14:37.471732 kubelet[2848]: I1216 03:14:37.470392 2848 factory.go:221] Registration of the systemd container factory successfully Dec 16 03:14:37.471732 kubelet[2848]: I1216 03:14:37.470644 2848 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 03:14:37.495097 kubelet[2848]: I1216 03:14:37.495039 2848 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 03:14:37.500225 kubelet[2848]: I1216 03:14:37.500179 2848 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 03:14:37.500405 kubelet[2848]: I1216 03:14:37.500394 2848 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 03:14:37.500496 kubelet[2848]: I1216 03:14:37.500482 2848 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 03:14:37.500597 kubelet[2848]: I1216 03:14:37.500585 2848 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 03:14:37.500771 kubelet[2848]: E1216 03:14:37.500731 2848 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 03:14:37.607088 kubelet[2848]: E1216 03:14:37.602858 2848 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 16 03:14:37.607088 kubelet[2848]: I1216 03:14:37.605317 2848 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 03:14:37.607088 kubelet[2848]: I1216 03:14:37.605332 2848 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 03:14:37.607088 kubelet[2848]: I1216 03:14:37.605358 2848 state_mem.go:36] "Initialized new in-memory state store" Dec 16 03:14:37.607088 kubelet[2848]: I1216 03:14:37.605617 2848 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 03:14:37.607088 kubelet[2848]: I1216 03:14:37.605634 2848 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 03:14:37.607088 kubelet[2848]: I1216 03:14:37.605659 2848 policy_none.go:49] "None policy: Start" Dec 16 03:14:37.607088 kubelet[2848]: I1216 03:14:37.605675 2848 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 03:14:37.607088 kubelet[2848]: I1216 03:14:37.605773 2848 state_mem.go:35] "Initializing new in-memory state store" Dec 16 03:14:37.607088 kubelet[2848]: I1216 03:14:37.606010 2848 state_mem.go:75] "Updated machine memory state" Dec 16 03:14:37.614174 kubelet[2848]: I1216 03:14:37.614122 
2848 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 03:14:37.614418 kubelet[2848]: I1216 03:14:37.614387 2848 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 03:14:37.614498 kubelet[2848]: I1216 03:14:37.614411 2848 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 03:14:37.616499 kubelet[2848]: I1216 03:14:37.616475 2848 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 03:14:37.620175 kubelet[2848]: E1216 03:14:37.619931 2848 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 03:14:37.729166 kubelet[2848]: I1216 03:14:37.729100 2848 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 16 03:14:37.806982 kubelet[2848]: I1216 03:14:37.806768 2848 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:37.807313 kubelet[2848]: I1216 03:14:37.807292 2848 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:37.807565 kubelet[2848]: I1216 03:14:37.807544 2848 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:14:37.885554 kubelet[2848]: I1216 03:14:37.882129 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/655b93addaea0f01ce24b01887655ff4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"655b93addaea0f01ce24b01887655ff4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:37.885554 kubelet[2848]: I1216 03:14:37.882184 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:37.885554 kubelet[2848]: I1216 03:14:37.882212 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:37.885554 kubelet[2848]: I1216 03:14:37.882252 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:37.885554 kubelet[2848]: I1216 03:14:37.882277 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 16 03:14:37.886536 kubelet[2848]: I1216 03:14:37.882297 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/655b93addaea0f01ce24b01887655ff4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"655b93addaea0f01ce24b01887655ff4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:37.886536 kubelet[2848]: I1216 03:14:37.882316 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/655b93addaea0f01ce24b01887655ff4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"655b93addaea0f01ce24b01887655ff4\") " pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:37.886536 kubelet[2848]: I1216 03:14:37.882339 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:37.886536 kubelet[2848]: I1216 03:14:37.882364 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:38.150097 kubelet[2848]: E1216 03:14:38.146059 2848 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 03:14:38.150097 kubelet[2848]: E1216 03:14:38.146340 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:38.150097 kubelet[2848]: E1216 03:14:38.146453 2848 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:38.150097 kubelet[2848]: E1216 03:14:38.146567 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:38.150097 kubelet[2848]: E1216 03:14:38.146624 2848 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 16 03:14:38.150097 kubelet[2848]: E1216 03:14:38.146757 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:38.152974 kubelet[2848]: I1216 03:14:38.151598 2848 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 16 03:14:38.152974 kubelet[2848]: I1216 03:14:38.151736 2848 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 16 03:14:38.441119 kubelet[2848]: I1216 03:14:38.440789 2848 apiserver.go:52] "Watching apiserver" Dec 16 03:14:38.465401 kubelet[2848]: I1216 03:14:38.464924 2848 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 03:14:38.559929 kubelet[2848]: I1216 03:14:38.557784 2848 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 16 03:14:38.559929 kubelet[2848]: I1216 03:14:38.558169 2848 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:38.559929 kubelet[2848]: E1216 03:14:38.558535 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:38.642300 kubelet[2848]: E1216 03:14:38.642180 2848 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 16 03:14:38.642806 kubelet[2848]: E1216 03:14:38.642746 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:38.644332 kubelet[2848]: 
E1216 03:14:38.644290 2848 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 16 03:14:38.644666 kubelet[2848]: E1216 03:14:38.644635 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:39.566686 kubelet[2848]: E1216 03:14:39.559165 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:39.566686 kubelet[2848]: E1216 03:14:39.559615 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:40.451369 kubelet[2848]: I1216 03:14:40.447765 2848 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 03:14:40.451369 kubelet[2848]: I1216 03:14:40.449065 2848 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 03:14:40.451681 containerd[1606]: time="2025-12-16T03:14:40.448647473Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 03:14:41.196433 systemd[1]: Created slice kubepods-besteffort-pod62c3dc1c_4b01_45af_8a2a_24c8701eb2ac.slice - libcontainer container kubepods-besteffort-pod62c3dc1c_4b01_45af_8a2a_24c8701eb2ac.slice. 
Dec 16 03:14:41.242330 kubelet[2848]: I1216 03:14:41.242242 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62c3dc1c-4b01-45af-8a2a-24c8701eb2ac-kube-proxy\") pod \"kube-proxy-p6f4x\" (UID: \"62c3dc1c-4b01-45af-8a2a-24c8701eb2ac\") " pod="kube-system/kube-proxy-p6f4x" Dec 16 03:14:41.242330 kubelet[2848]: I1216 03:14:41.242331 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62c3dc1c-4b01-45af-8a2a-24c8701eb2ac-lib-modules\") pod \"kube-proxy-p6f4x\" (UID: \"62c3dc1c-4b01-45af-8a2a-24c8701eb2ac\") " pod="kube-system/kube-proxy-p6f4x" Dec 16 03:14:41.243078 kubelet[2848]: I1216 03:14:41.242364 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptkhs\" (UniqueName: \"kubernetes.io/projected/62c3dc1c-4b01-45af-8a2a-24c8701eb2ac-kube-api-access-ptkhs\") pod \"kube-proxy-p6f4x\" (UID: \"62c3dc1c-4b01-45af-8a2a-24c8701eb2ac\") " pod="kube-system/kube-proxy-p6f4x" Dec 16 03:14:41.243078 kubelet[2848]: I1216 03:14:41.242394 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62c3dc1c-4b01-45af-8a2a-24c8701eb2ac-xtables-lock\") pod \"kube-proxy-p6f4x\" (UID: \"62c3dc1c-4b01-45af-8a2a-24c8701eb2ac\") " pod="kube-system/kube-proxy-p6f4x" Dec 16 03:14:41.510488 kubelet[2848]: E1216 03:14:41.510329 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:41.513006 containerd[1606]: time="2025-12-16T03:14:41.512532109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p6f4x,Uid:62c3dc1c-4b01-45af-8a2a-24c8701eb2ac,Namespace:kube-system,Attempt:0,}" Dec 
16 03:14:41.787268 systemd[1]: Created slice kubepods-besteffort-podf2605fcc_f37c_46de_933a_e8f8f352dfcc.slice - libcontainer container kubepods-besteffort-podf2605fcc_f37c_46de_933a_e8f8f352dfcc.slice. Dec 16 03:14:41.838568 containerd[1606]: time="2025-12-16T03:14:41.838483551Z" level=info msg="connecting to shim 79ff70a01e4ba6cfcdf6f198518a3606c4ed6ccba288945c46ff0eae616b7fdd" address="unix:///run/containerd/s/b6b857e321e583df343ca33d9a714c49b14c30ac43aa1b8f0484dd8a054247c8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:14:41.852086 kubelet[2848]: I1216 03:14:41.851815 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpbf7\" (UniqueName: \"kubernetes.io/projected/f2605fcc-f37c-46de-933a-e8f8f352dfcc-kube-api-access-cpbf7\") pod \"tigera-operator-7dcd859c48-sk6hh\" (UID: \"f2605fcc-f37c-46de-933a-e8f8f352dfcc\") " pod="tigera-operator/tigera-operator-7dcd859c48-sk6hh" Dec 16 03:14:41.852086 kubelet[2848]: I1216 03:14:41.851891 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f2605fcc-f37c-46de-933a-e8f8f352dfcc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-sk6hh\" (UID: \"f2605fcc-f37c-46de-933a-e8f8f352dfcc\") " pod="tigera-operator/tigera-operator-7dcd859c48-sk6hh" Dec 16 03:14:42.003145 systemd[1]: Started cri-containerd-79ff70a01e4ba6cfcdf6f198518a3606c4ed6ccba288945c46ff0eae616b7fdd.scope - libcontainer container 79ff70a01e4ba6cfcdf6f198518a3606c4ed6ccba288945c46ff0eae616b7fdd. 
Dec 16 03:14:42.039780 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 16 03:14:42.039946 kernel: audit: type=1334 audit(1765854882.036:426): prog-id=131 op=LOAD Dec 16 03:14:42.036000 audit: BPF prog-id=131 op=LOAD Dec 16 03:14:42.044297 kernel: audit: type=1334 audit(1765854882.039:427): prog-id=132 op=LOAD Dec 16 03:14:42.039000 audit: BPF prog-id=132 op=LOAD Dec 16 03:14:42.039000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.056468 kernel: audit: type=1300 audit(1765854882.039:427): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.056632 kernel: audit: type=1327 audit(1765854882.039:427): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666637306130316534626136636663646636663139383531386133 Dec 16 03:14:42.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666637306130316534626136636663646636663139383531386133 Dec 16 03:14:42.066010 kernel: audit: type=1334 audit(1765854882.039:428): prog-id=132 op=UNLOAD Dec 16 03:14:42.039000 audit: BPF prog-id=132 op=UNLOAD Dec 16 03:14:42.075377 kernel: audit: type=1300 audit(1765854882.039:428): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.039000 audit[2916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666637306130316534626136636663646636663139383531386133 Dec 16 03:14:42.096919 kernel: audit: type=1327 audit(1765854882.039:428): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666637306130316534626136636663646636663139383531386133 Dec 16 03:14:42.097077 kernel: audit: type=1334 audit(1765854882.039:429): prog-id=133 op=LOAD Dec 16 03:14:42.097107 kernel: audit: type=1300 audit(1765854882.039:429): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.039000 audit: BPF prog-id=133 op=LOAD Dec 16 03:14:42.039000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:14:42.102903 containerd[1606]: time="2025-12-16T03:14:42.100275018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sk6hh,Uid:f2605fcc-f37c-46de-933a-e8f8f352dfcc,Namespace:tigera-operator,Attempt:0,}" Dec 16 03:14:42.105861 kernel: audit: type=1327 audit(1765854882.039:429): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666637306130316534626136636663646636663139383531386133 Dec 16 03:14:42.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666637306130316534626136636663646636663139383531386133 Dec 16 03:14:42.039000 audit: BPF prog-id=134 op=LOAD Dec 16 03:14:42.039000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666637306130316534626136636663646636663139383531386133 Dec 16 03:14:42.039000 audit: BPF prog-id=134 op=UNLOAD Dec 16 03:14:42.039000 audit[2916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.039000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666637306130316534626136636663646636663139383531386133 Dec 16 03:14:42.039000 audit: BPF prog-id=133 op=UNLOAD Dec 16 03:14:42.039000 audit[2916]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666637306130316534626136636663646636663139383531386133 Dec 16 03:14:42.039000 audit: BPF prog-id=135 op=LOAD Dec 16 03:14:42.039000 audit[2916]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666637306130316534626136636663646636663139383531386133 Dec 16 03:14:42.139610 containerd[1606]: time="2025-12-16T03:14:42.139459505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p6f4x,Uid:62c3dc1c-4b01-45af-8a2a-24c8701eb2ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"79ff70a01e4ba6cfcdf6f198518a3606c4ed6ccba288945c46ff0eae616b7fdd\"" Dec 16 03:14:42.140553 kubelet[2848]: E1216 
03:14:42.140505 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:42.145865 containerd[1606]: time="2025-12-16T03:14:42.142962960Z" level=info msg="CreateContainer within sandbox \"79ff70a01e4ba6cfcdf6f198518a3606c4ed6ccba288945c46ff0eae616b7fdd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 03:14:42.233665 containerd[1606]: time="2025-12-16T03:14:42.232993190Z" level=info msg="connecting to shim ed3d2d0a7f4ebb82642cfe0726b23129eb4075677a2d45d6f4882e0301f986dd" address="unix:///run/containerd/s/c677dbbf50f3b859ee8b9814df39848cec73bb1b48ecfb04e3815441d8efa6aa" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:14:42.254322 containerd[1606]: time="2025-12-16T03:14:42.254221579Z" level=info msg="Container a0bc84504faac33099635522156450a27fc9266802e0b310252dc756d8c040eb: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:14:42.271612 containerd[1606]: time="2025-12-16T03:14:42.271522637Z" level=info msg="CreateContainer within sandbox \"79ff70a01e4ba6cfcdf6f198518a3606c4ed6ccba288945c46ff0eae616b7fdd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0bc84504faac33099635522156450a27fc9266802e0b310252dc756d8c040eb\"" Dec 16 03:14:42.274773 containerd[1606]: time="2025-12-16T03:14:42.273035957Z" level=info msg="StartContainer for \"a0bc84504faac33099635522156450a27fc9266802e0b310252dc756d8c040eb\"" Dec 16 03:14:42.276148 containerd[1606]: time="2025-12-16T03:14:42.276117140Z" level=info msg="connecting to shim a0bc84504faac33099635522156450a27fc9266802e0b310252dc756d8c040eb" address="unix:///run/containerd/s/b6b857e321e583df343ca33d9a714c49b14c30ac43aa1b8f0484dd8a054247c8" protocol=ttrpc version=3 Dec 16 03:14:42.283142 systemd[1]: Started cri-containerd-ed3d2d0a7f4ebb82642cfe0726b23129eb4075677a2d45d6f4882e0301f986dd.scope - libcontainer container 
ed3d2d0a7f4ebb82642cfe0726b23129eb4075677a2d45d6f4882e0301f986dd. Dec 16 03:14:42.314196 systemd[1]: Started cri-containerd-a0bc84504faac33099635522156450a27fc9266802e0b310252dc756d8c040eb.scope - libcontainer container a0bc84504faac33099635522156450a27fc9266802e0b310252dc756d8c040eb. Dec 16 03:14:42.321000 audit: BPF prog-id=136 op=LOAD Dec 16 03:14:42.321000 audit: BPF prog-id=137 op=LOAD Dec 16 03:14:42.321000 audit[2963]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2952 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336432643061376634656262383236343263666530373236623233 Dec 16 03:14:42.321000 audit: BPF prog-id=137 op=UNLOAD Dec 16 03:14:42.321000 audit[2963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2952 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336432643061376634656262383236343263666530373236623233 Dec 16 03:14:42.321000 audit: BPF prog-id=138 op=LOAD Dec 16 03:14:42.321000 audit[2963]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2952 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336432643061376634656262383236343263666530373236623233 Dec 16 03:14:42.324000 audit: BPF prog-id=139 op=LOAD Dec 16 03:14:42.324000 audit[2963]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2952 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.324000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336432643061376634656262383236343263666530373236623233 Dec 16 03:14:42.327000 audit: BPF prog-id=139 op=UNLOAD Dec 16 03:14:42.327000 audit[2963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2952 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336432643061376634656262383236343263666530373236623233 Dec 16 03:14:42.327000 audit: BPF prog-id=138 op=UNLOAD Dec 16 03:14:42.327000 audit[2963]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2952 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336432643061376634656262383236343263666530373236623233 Dec 16 03:14:42.327000 audit: BPF prog-id=140 op=LOAD Dec 16 03:14:42.327000 audit[2963]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2952 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564336432643061376634656262383236343263666530373236623233 Dec 16 03:14:42.387617 containerd[1606]: time="2025-12-16T03:14:42.383890278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sk6hh,Uid:f2605fcc-f37c-46de-933a-e8f8f352dfcc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ed3d2d0a7f4ebb82642cfe0726b23129eb4075677a2d45d6f4882e0301f986dd\"" Dec 16 03:14:42.389110 containerd[1606]: time="2025-12-16T03:14:42.388926609Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 03:14:42.395000 audit: BPF prog-id=141 op=LOAD Dec 16 03:14:42.395000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2905 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.395000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626338343530346661616333333039393633353532323135363435 Dec 16 03:14:42.395000 audit: BPF prog-id=142 op=LOAD Dec 16 03:14:42.395000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2905 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626338343530346661616333333039393633353532323135363435 Dec 16 03:14:42.395000 audit: BPF prog-id=142 op=UNLOAD Dec 16 03:14:42.395000 audit[2975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626338343530346661616333333039393633353532323135363435 Dec 16 03:14:42.395000 audit: BPF prog-id=141 op=UNLOAD Dec 16 03:14:42.395000 audit[2975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:14:42.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626338343530346661616333333039393633353532323135363435 Dec 16 03:14:42.395000 audit: BPF prog-id=143 op=LOAD Dec 16 03:14:42.395000 audit[2975]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2905 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626338343530346661616333333039393633353532323135363435 Dec 16 03:14:42.527886 containerd[1606]: time="2025-12-16T03:14:42.527386195Z" level=info msg="StartContainer for \"a0bc84504faac33099635522156450a27fc9266802e0b310252dc756d8c040eb\" returns successfully" Dec 16 03:14:42.596008 kubelet[2848]: E1216 03:14:42.595842 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:42.642964 kubelet[2848]: I1216 03:14:42.642845 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p6f4x" podStartSLOduration=1.642819479 podStartE2EDuration="1.642819479s" podCreationTimestamp="2025-12-16 03:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:14:42.641945118 +0000 UTC m=+5.380637548" watchObservedRunningTime="2025-12-16 03:14:42.642819479 +0000 UTC m=+5.381511889" Dec 
16 03:14:42.761000 audit[3054]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.761000 audit[3054]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec4d06840 a2=0 a3=7ffec4d0682c items=0 ppid=2997 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.761000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:14:42.764000 audit[3055]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3055 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:42.764000 audit[3055]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6fc03190 a2=0 a3=7ffe6fc0317c items=0 ppid=2997 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.764000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 16 03:14:42.767000 audit[3056]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.767000 audit[3056]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe62b72950 a2=0 a3=7ffe62b7293c items=0 ppid=2997 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.767000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:14:42.770000 audit[3058]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.772000 audit[3057]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:42.772000 audit[3057]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2ff6ce60 a2=0 a3=7ffc2ff6ce4c items=0 ppid=2997 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 16 03:14:42.770000 audit[3058]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa60a7200 a2=0 a3=7fffa60a71ec items=0 ppid=2997 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.770000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:14:42.774000 audit[3062]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:42.774000 audit[3062]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea99084e0 a2=0 a3=7ffea99084cc items=0 ppid=2997 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:14:42.774000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 16 03:14:42.872000 audit[3063]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.872000 audit[3063]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcb9a92a80 a2=0 a3=7ffcb9a92a6c items=0 ppid=2997 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.872000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:14:42.903000 audit[3065]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.903000 audit[3065]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd97e112a0 a2=0 a3=7ffd97e1128c items=0 ppid=2997 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.903000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 16 03:14:42.943000 audit[3068]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.943000 audit[3068]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff91911ad0 a2=0 a3=7fff91911abc items=0 ppid=2997 pid=3068 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 16 03:14:42.948000 audit[3069]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.948000 audit[3069]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd87a15b40 a2=0 a3=7ffd87a15b2c items=0 ppid=2997 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.948000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:14:42.955000 audit[3071]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.955000 audit[3071]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb44d45d0 a2=0 a3=7ffcb44d45bc items=0 ppid=2997 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 
03:14:42.968000 audit[3072]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.968000 audit[3072]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc27aed470 a2=0 a3=7ffc27aed45c items=0 ppid=2997 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:14:42.976000 audit[3074]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.976000 audit[3074]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe5d8a4250 a2=0 a3=7ffe5d8a423c items=0 ppid=2997 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:42.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:14:42.993000 audit[3077]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:42.993000 audit[3077]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffebf6ce220 a2=0 a3=7ffebf6ce20c items=0 ppid=2997 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:14:42.993000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 16 03:14:43.002000 audit[3078]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.002000 audit[3078]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffedb104850 a2=0 a3=7ffedb10483c items=0 ppid=2997 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:14:43.009000 audit[3080]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.009000 audit[3080]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc49668c60 a2=0 a3=7ffc49668c4c items=0 ppid=2997 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.009000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:14:43.016000 audit[3081]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.016000 audit[3081]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffe4aba49a0 a2=0 a3=7ffe4aba498c items=0 ppid=2997 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.016000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:14:43.036000 audit[3083]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.036000 audit[3083]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdfede690 a2=0 a3=7ffcdfede67c items=0 ppid=2997 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.036000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:14:43.065000 audit[3086]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.065000 audit[3086]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd3ba88370 a2=0 a3=7ffd3ba8835c items=0 ppid=2997 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.065000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:14:43.089000 audit[3089]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.089000 audit[3089]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeaafd0a60 a2=0 a3=7ffeaafd0a4c items=0 ppid=2997 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.089000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:14:43.093000 audit[3090]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.093000 audit[3090]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffedada2460 a2=0 a3=7ffedada244c items=0 ppid=2997 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:14:43.102000 audit[3092]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.102000 audit[3092]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffc84ed2650 a2=0 a3=7ffc84ed263c items=0 ppid=2997 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:14:43.114000 audit[3095]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.114000 audit[3095]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff984a7490 a2=0 a3=7fff984a747c items=0 ppid=2997 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.114000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:14:43.115000 audit[3096]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.115000 audit[3096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe59030730 a2=0 a3=7ffe5903071c items=0 ppid=2997 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 
03:14:43.127000 audit[3098]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 16 03:14:43.127000 audit[3098]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe5cfb9c00 a2=0 a3=7ffe5cfb9bec items=0 ppid=2997 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:14:43.274000 audit[3104]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:43.274000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcd54b4e00 a2=0 a3=7ffcd54b4dec items=0 ppid=2997 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.274000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:43.342000 audit[3104]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:43.342000 audit[3104]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcd54b4e00 a2=0 a3=7ffcd54b4dec items=0 ppid=2997 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.342000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:43.348000 audit[3109]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.348000 audit[3109]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd6db7b6b0 a2=0 a3=7ffd6db7b69c items=0 ppid=2997 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 16 03:14:43.367000 audit[3111]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.367000 audit[3111]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc47263670 a2=0 a3=7ffc4726365c items=0 ppid=2997 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.367000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 16 03:14:43.381000 audit[3114]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.381000 audit[3114]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 
a1=7ffc643418b0 a2=0 a3=7ffc6434189c items=0 ppid=2997 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 16 03:14:43.384000 audit[3115]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.384000 audit[3115]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8a8fd540 a2=0 a3=7ffc8a8fd52c items=0 ppid=2997 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 16 03:14:43.394000 audit[3117]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.394000 audit[3117]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd45caa6c0 a2=0 a3=7ffd45caa6ac items=0 ppid=2997 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.394000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 16 03:14:43.403000 audit[3118]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.403000 audit[3118]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff14bc4eb0 a2=0 a3=7fff14bc4e9c items=0 ppid=2997 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.403000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 16 03:14:43.423000 audit[3120]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.423000 audit[3120]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff08332a10 a2=0 a3=7fff083329fc items=0 ppid=2997 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 16 03:14:43.437000 audit[3123]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.437000 audit[3123]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7fff178d77e0 a2=0 a3=7fff178d77cc items=0 ppid=2997 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.437000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 16 03:14:43.440000 audit[3124]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.440000 audit[3124]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff781b2190 a2=0 a3=7fff781b217c items=0 ppid=2997 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 16 03:14:43.444000 audit[3126]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.444000 audit[3126]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf0686620 a2=0 a3=7ffcf068660c items=0 ppid=2997 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.444000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 16 03:14:43.446000 audit[3127]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.446000 audit[3127]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfbf8c9d0 a2=0 a3=7ffdfbf8c9bc items=0 ppid=2997 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.446000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 16 03:14:43.451000 audit[3129]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.451000 audit[3129]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7d981740 a2=0 a3=7ffe7d98172c items=0 ppid=2997 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.451000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 16 03:14:43.464000 audit[3132]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.464000 audit[3132]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7ffc8e4d8fe0 a2=0 a3=7ffc8e4d8fcc items=0 ppid=2997 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 16 03:14:43.486000 audit[3135]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.486000 audit[3135]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd06788610 a2=0 a3=7ffd067885fc items=0 ppid=2997 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.486000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 16 03:14:43.493000 audit[3136]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.493000 audit[3136]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffff8448500 a2=0 a3=7ffff84484ec items=0 ppid=2997 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.493000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 16 03:14:43.508000 audit[3138]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.508000 audit[3138]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd9887e000 a2=0 a3=7ffd9887dfec items=0 ppid=2997 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.508000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:14:43.520000 audit[3141]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.520000 audit[3141]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe94e37c80 a2=0 a3=7ffe94e37c6c items=0 ppid=2997 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.520000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 16 03:14:43.523000 audit[3142]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3142 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.523000 audit[3142]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdff259ba0 a2=0 a3=7ffdff259b8c items=0 ppid=2997 
pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.523000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 16 03:14:43.530000 audit[3144]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.530000 audit[3144]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc327973e0 a2=0 a3=7ffc327973cc items=0 ppid=2997 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.530000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 16 03:14:43.535000 audit[3145]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.535000 audit[3145]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff84394f0 a2=0 a3=7ffff84394dc items=0 ppid=2997 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.535000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 16 03:14:43.551000 audit[3147]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 16 03:14:43.551000 audit[3147]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc44b462d0 a2=0 a3=7ffc44b462bc items=0 ppid=2997 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.551000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:14:43.564000 audit[3150]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 16 03:14:43.564000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff2d47de70 a2=0 a3=7fff2d47de5c items=0 ppid=2997 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 16 03:14:43.570000 audit[3152]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:14:43.570000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffcf7e31330 a2=0 a3=7ffcf7e3131c items=0 ppid=2997 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.570000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:43.573000 audit[3152]: NETFILTER_CFG table=nat:104 
family=10 entries=7 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 16 03:14:43.573000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcf7e31330 a2=0 a3=7ffcf7e3131c items=0 ppid=2997 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:43.573000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:44.615254 kubelet[2848]: E1216 03:14:44.614591 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:45.265937 kubelet[2848]: E1216 03:14:45.265059 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:45.518096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2493124673.mount: Deactivated successfully. 
Dec 16 03:14:45.608752 kubelet[2848]: E1216 03:14:45.608562 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:45.608752 kubelet[2848]: E1216 03:14:45.608647 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:46.144746 kubelet[2848]: E1216 03:14:46.144479 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:46.612346 kubelet[2848]: E1216 03:14:46.612292 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:14:47.060173 containerd[1606]: time="2025-12-16T03:14:47.058908848Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:47.060904 containerd[1606]: time="2025-12-16T03:14:47.060825314Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25053688" Dec 16 03:14:47.069944 containerd[1606]: time="2025-12-16T03:14:47.068415306Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:47.083955 containerd[1606]: time="2025-12-16T03:14:47.081972574Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:14:47.083955 containerd[1606]: time="2025-12-16T03:14:47.082863765Z" level=info msg="Pulled image 
\"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.693776314s" Dec 16 03:14:47.083955 containerd[1606]: time="2025-12-16T03:14:47.082892640Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Dec 16 03:14:47.096769 containerd[1606]: time="2025-12-16T03:14:47.094675017Z" level=info msg="CreateContainer within sandbox \"ed3d2d0a7f4ebb82642cfe0726b23129eb4075677a2d45d6f4882e0301f986dd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 03:14:47.138993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573267888.mount: Deactivated successfully. Dec 16 03:14:47.162750 containerd[1606]: time="2025-12-16T03:14:47.161906705Z" level=info msg="Container 8b532f4e5cd98d4e3f5fbdbbfa24566c694ecc5784c89c285a19800e155f5881: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:14:47.181732 containerd[1606]: time="2025-12-16T03:14:47.178603564Z" level=info msg="CreateContainer within sandbox \"ed3d2d0a7f4ebb82642cfe0726b23129eb4075677a2d45d6f4882e0301f986dd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8b532f4e5cd98d4e3f5fbdbbfa24566c694ecc5784c89c285a19800e155f5881\"" Dec 16 03:14:47.187856 containerd[1606]: time="2025-12-16T03:14:47.187360004Z" level=info msg="StartContainer for \"8b532f4e5cd98d4e3f5fbdbbfa24566c694ecc5784c89c285a19800e155f5881\"" Dec 16 03:14:47.194052 containerd[1606]: time="2025-12-16T03:14:47.193965770Z" level=info msg="connecting to shim 8b532f4e5cd98d4e3f5fbdbbfa24566c694ecc5784c89c285a19800e155f5881" address="unix:///run/containerd/s/c677dbbf50f3b859ee8b9814df39848cec73bb1b48ecfb04e3815441d8efa6aa" protocol=ttrpc version=3 
Dec 16 03:14:47.251094 systemd[1]: Started cri-containerd-8b532f4e5cd98d4e3f5fbdbbfa24566c694ecc5784c89c285a19800e155f5881.scope - libcontainer container 8b532f4e5cd98d4e3f5fbdbbfa24566c694ecc5784c89c285a19800e155f5881. Dec 16 03:14:47.294000 audit: BPF prog-id=144 op=LOAD Dec 16 03:14:47.296984 kernel: kauditd_printk_skb: 202 callbacks suppressed Dec 16 03:14:47.297062 kernel: audit: type=1334 audit(1765854887.294:498): prog-id=144 op=LOAD Dec 16 03:14:47.304877 kernel: audit: type=1334 audit(1765854887.295:499): prog-id=145 op=LOAD Dec 16 03:14:47.295000 audit: BPF prog-id=145 op=LOAD Dec 16 03:14:47.295000 audit[3162]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2952 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.324232 kernel: audit: type=1300 audit(1765854887.295:499): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2952 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.324408 kernel: audit: type=1327 audit(1765854887.295:499): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353332663465356364393864346533663566626462626661323435 Dec 16 03:14:47.295000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353332663465356364393864346533663566626462626661323435 Dec 16 03:14:47.300000 audit: BPF prog-id=145 op=UNLOAD Dec 16 03:14:47.300000 
audit[3162]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2952 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.348818 kernel: audit: type=1334 audit(1765854887.300:500): prog-id=145 op=UNLOAD Dec 16 03:14:47.348953 kernel: audit: type=1300 audit(1765854887.300:500): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2952 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.348992 kernel: audit: type=1327 audit(1765854887.300:500): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353332663465356364393864346533663566626462626661323435 Dec 16 03:14:47.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353332663465356364393864346533663566626462626661323435 Dec 16 03:14:47.368210 kernel: audit: type=1334 audit(1765854887.300:501): prog-id=146 op=LOAD Dec 16 03:14:47.300000 audit: BPF prog-id=146 op=LOAD Dec 16 03:14:47.300000 audit[3162]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2952 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.379373 kernel: audit: type=1300 audit(1765854887.300:501): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130488 a2=98 a3=0 
items=0 ppid=2952 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.380977 kernel: audit: type=1327 audit(1765854887.300:501): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353332663465356364393864346533663566626462626661323435 Dec 16 03:14:47.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353332663465356364393864346533663566626462626661323435 Dec 16 03:14:47.300000 audit: BPF prog-id=147 op=LOAD Dec 16 03:14:47.300000 audit[3162]: SYSCALL arch=c000003e syscall=321 success=yes exit=24 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2952 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353332663465356364393864346533663566626462626661323435 Dec 16 03:14:47.301000 audit: BPF prog-id=147 op=UNLOAD Dec 16 03:14:47.301000 audit[3162]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=18 a1=0 a2=0 a3=0 items=0 ppid=2952 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.301000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353332663465356364393864346533663566626462626661323435 Dec 16 03:14:47.301000 audit: BPF prog-id=146 op=UNLOAD Dec 16 03:14:47.301000 audit[3162]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2952 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353332663465356364393864346533663566626462626661323435 Dec 16 03:14:47.301000 audit: BPF prog-id=148 op=LOAD Dec 16 03:14:47.301000 audit[3162]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2952 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:47.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353332663465356364393864346533663566626462626661323435 Dec 16 03:14:47.403151 containerd[1606]: time="2025-12-16T03:14:47.400600713Z" level=info msg="StartContainer for \"8b532f4e5cd98d4e3f5fbdbbfa24566c694ecc5784c89c285a19800e155f5881\" returns successfully" Dec 16 03:14:55.482487 sudo[1846]: pam_unix(sudo:session): session closed for user root Dec 16 03:14:55.481000 audit[1846]: USER_END pid=1846 uid=500 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:14:55.484502 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 03:14:55.484600 kernel: audit: type=1106 audit(1765854895.481:506): pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:14:55.494058 kernel: audit: type=1104 audit(1765854895.481:507): pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 16 03:14:55.481000 audit[1846]: CRED_DISP pid=1846 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:55.498307 sshd[1845]: Connection closed by 10.0.0.1 port 42322 Dec 16 03:14:55.498972 sshd-session[1841]: pam_unix(sshd:session): session closed for user core Dec 16 03:14:55.500000 audit[1841]: USER_END pid=1841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:14:55.507746 kernel: audit: type=1106 audit(1765854895.500:508): pid=1841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:14:55.501000 audit[1841]: CRED_DISP pid=1841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:14:55.507997 systemd-logind[1586]: Session 10 logged out. Waiting for processes to exit. Dec 16 03:14:55.509569 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:42322.service: Deactivated successfully. Dec 16 03:14:55.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.26:22-10.0.0.1:42322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:14:55.517467 kernel: audit: type=1104 audit(1765854895.501:509): pid=1841 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:14:55.517532 kernel: audit: type=1131 audit(1765854895.510:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.26:22-10.0.0.1:42322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:14:55.520732 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 03:14:55.522816 systemd[1]: session-10.scope: Consumed 5.991s CPU time, 217.7M memory peak. Dec 16 03:14:55.531986 systemd-logind[1586]: Removed session 10. Dec 16 03:14:55.939000 audit[3257]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:55.944745 kernel: audit: type=1325 audit(1765854895.939:511): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:55.939000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd8e656d10 a2=0 a3=7ffd8e656cfc items=0 ppid=2997 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:55.954124 kernel: audit: type=1300 audit(1765854895.939:511): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd8e656d10 a2=0 a3=7ffd8e656cfc items=0 ppid=2997 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:55.954298 kernel: 
audit: type=1327 audit(1765854895.939:511): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:55.939000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:55.957451 kernel: audit: type=1325 audit(1765854895.951:512): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:55.951000 audit[3257]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3257 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:55.951000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8e656d10 a2=0 a3=0 items=0 ppid=2997 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:55.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:55.964758 kernel: audit: type=1300 audit(1765854895.951:512): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd8e656d10 a2=0 a3=0 items=0 ppid=2997 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:55.971000 audit[3259]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:55.971000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffea4eb9540 a2=0 a3=7ffea4eb952c items=0 ppid=2997 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:55.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:55.976000 audit[3259]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:55.976000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea4eb9540 a2=0 a3=0 items=0 ppid=2997 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:55.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:57.964000 audit[3261]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:57.964000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff62913070 a2=0 a3=7fff6291305c items=0 ppid=2997 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:57.964000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:57.970000 audit[3261]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:57.970000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff62913070 a2=0 a3=0 items=0 ppid=2997 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:57.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:57.995000 audit[3263]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:57.995000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffea76482d0 a2=0 a3=7ffea76482bc items=0 ppid=2997 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:57.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:58.003000 audit[3263]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:58.003000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffea76482d0 a2=0 a3=0 items=0 ppid=2997 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:58.003000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:59.059000 audit[3265]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:59.059000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe358f09b0 a2=0 a3=7ffe358f099c 
items=0 ppid=2997 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.059000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:14:59.067000 audit[3265]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:14:59.067000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe358f09b0 a2=0 a3=0 items=0 ppid=2997 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:14:59.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:00.203000 audit[3267]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:00.203000 audit[3267]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcafe8baf0 a2=0 a3=7ffcafe8badc items=0 ppid=2997 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:00.224774 kubelet[2848]: I1216 03:15:00.223324 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-sk6hh" podStartSLOduration=14.518903055 
podStartE2EDuration="19.223298765s" podCreationTimestamp="2025-12-16 03:14:41 +0000 UTC" firstStartedPulling="2025-12-16 03:14:42.388087204 +0000 UTC m=+5.126779624" lastFinishedPulling="2025-12-16 03:14:47.092482924 +0000 UTC m=+9.831175334" observedRunningTime="2025-12-16 03:14:47.681981526 +0000 UTC m=+10.420673936" watchObservedRunningTime="2025-12-16 03:15:00.223298765 +0000 UTC m=+22.961991185" Dec 16 03:15:00.209000 audit[3267]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:00.209000 audit[3267]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcafe8baf0 a2=0 a3=0 items=0 ppid=2997 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:00.240045 systemd[1]: Created slice kubepods-besteffort-pod989356ff_8a91_470e_847a_131d2565b5bf.slice - libcontainer container kubepods-besteffort-pod989356ff_8a91_470e_847a_131d2565b5bf.slice. 
Dec 16 03:15:00.247523 kubelet[2848]: I1216 03:15:00.247456 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kcrj\" (UniqueName: \"kubernetes.io/projected/989356ff-8a91-470e-847a-131d2565b5bf-kube-api-access-7kcrj\") pod \"calico-typha-5995899b87-k5b9m\" (UID: \"989356ff-8a91-470e-847a-131d2565b5bf\") " pod="calico-system/calico-typha-5995899b87-k5b9m" Dec 16 03:15:00.247523 kubelet[2848]: I1216 03:15:00.247520 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/989356ff-8a91-470e-847a-131d2565b5bf-tigera-ca-bundle\") pod \"calico-typha-5995899b87-k5b9m\" (UID: \"989356ff-8a91-470e-847a-131d2565b5bf\") " pod="calico-system/calico-typha-5995899b87-k5b9m" Dec 16 03:15:00.247869 kubelet[2848]: I1216 03:15:00.247543 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/989356ff-8a91-470e-847a-131d2565b5bf-typha-certs\") pod \"calico-typha-5995899b87-k5b9m\" (UID: \"989356ff-8a91-470e-847a-131d2565b5bf\") " pod="calico-system/calico-typha-5995899b87-k5b9m" Dec 16 03:15:00.476491 systemd[1]: Created slice kubepods-besteffort-pod03bd91df_ae0e_4e82_a4fa_86fb80449796.slice - libcontainer container kubepods-besteffort-pod03bd91df_ae0e_4e82_a4fa_86fb80449796.slice. 
Dec 16 03:15:00.547020 kubelet[2848]: E1216 03:15:00.546940 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:00.547578 containerd[1606]: time="2025-12-16T03:15:00.547534091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5995899b87-k5b9m,Uid:989356ff-8a91-470e-847a-131d2565b5bf,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:00.550131 kubelet[2848]: I1216 03:15:00.550076 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/03bd91df-ae0e-4e82-a4fa-86fb80449796-flexvol-driver-host\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.550131 kubelet[2848]: I1216 03:15:00.550138 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/03bd91df-ae0e-4e82-a4fa-86fb80449796-node-certs\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.550376 kubelet[2848]: I1216 03:15:00.550271 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03bd91df-ae0e-4e82-a4fa-86fb80449796-tigera-ca-bundle\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.550517 kubelet[2848]: I1216 03:15:00.550488 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/03bd91df-ae0e-4e82-a4fa-86fb80449796-cni-bin-dir\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " 
pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.550601 kubelet[2848]: I1216 03:15:00.550531 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/03bd91df-ae0e-4e82-a4fa-86fb80449796-cni-net-dir\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.550601 kubelet[2848]: I1216 03:15:00.550569 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnmq5\" (UniqueName: \"kubernetes.io/projected/03bd91df-ae0e-4e82-a4fa-86fb80449796-kube-api-access-lnmq5\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.551026 kubelet[2848]: I1216 03:15:00.550607 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/03bd91df-ae0e-4e82-a4fa-86fb80449796-var-lib-calico\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.551026 kubelet[2848]: I1216 03:15:00.550644 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03bd91df-ae0e-4e82-a4fa-86fb80449796-xtables-lock\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.551026 kubelet[2848]: I1216 03:15:00.550681 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/03bd91df-ae0e-4e82-a4fa-86fb80449796-policysync\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.551026 
kubelet[2848]: I1216 03:15:00.550742 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/03bd91df-ae0e-4e82-a4fa-86fb80449796-var-run-calico\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.551026 kubelet[2848]: I1216 03:15:00.550788 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03bd91df-ae0e-4e82-a4fa-86fb80449796-lib-modules\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.551197 kubelet[2848]: I1216 03:15:00.550938 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/03bd91df-ae0e-4e82-a4fa-86fb80449796-cni-log-dir\") pod \"calico-node-5cptp\" (UID: \"03bd91df-ae0e-4e82-a4fa-86fb80449796\") " pod="calico-system/calico-node-5cptp" Dec 16 03:15:00.586421 kubelet[2848]: E1216 03:15:00.586361 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:15:00.591797 containerd[1606]: time="2025-12-16T03:15:00.590404555Z" level=info msg="connecting to shim b69676be4868af05dd8fcc0dd13ffd9e031b1ac0c6376dafa1ef6d87219b8a47" address="unix:///run/containerd/s/21d01645c31d1b0952485aa30a9c3485076f9b9d0e75588f1b09e2bac10284a4" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:00.626056 systemd[1]: Started cri-containerd-b69676be4868af05dd8fcc0dd13ffd9e031b1ac0c6376dafa1ef6d87219b8a47.scope - libcontainer container 
b69676be4868af05dd8fcc0dd13ffd9e031b1ac0c6376dafa1ef6d87219b8a47. Dec 16 03:15:00.643000 audit: BPF prog-id=149 op=LOAD Dec 16 03:15:00.646790 kernel: kauditd_printk_skb: 31 callbacks suppressed Dec 16 03:15:00.646875 kernel: audit: type=1334 audit(1765854900.643:523): prog-id=149 op=LOAD Dec 16 03:15:00.644000 audit: BPF prog-id=150 op=LOAD Dec 16 03:15:00.648743 kernel: audit: type=1334 audit(1765854900.644:524): prog-id=150 op=LOAD Dec 16 03:15:00.644000 audit[3289]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3277 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.654980 kernel: audit: type=1300 audit(1765854900.644:524): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3277 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.655139 kubelet[2848]: I1216 03:15:00.655110 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/05279c13-1f07-47f3-aaa0-f3eff20006ee-socket-dir\") pod \"csi-node-driver-h4rmp\" (UID: \"05279c13-1f07-47f3-aaa0-f3eff20006ee\") " pod="calico-system/csi-node-driver-h4rmp" Dec 16 03:15:00.655216 kubelet[2848]: I1216 03:15:00.655154 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvmgs\" (UniqueName: \"kubernetes.io/projected/05279c13-1f07-47f3-aaa0-f3eff20006ee-kube-api-access-qvmgs\") pod \"csi-node-driver-h4rmp\" (UID: \"05279c13-1f07-47f3-aaa0-f3eff20006ee\") " pod="calico-system/csi-node-driver-h4rmp" Dec 16 03:15:00.655250 kubelet[2848]: I1216 03:15:00.655221 2848 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/05279c13-1f07-47f3-aaa0-f3eff20006ee-registration-dir\") pod \"csi-node-driver-h4rmp\" (UID: \"05279c13-1f07-47f3-aaa0-f3eff20006ee\") " pod="calico-system/csi-node-driver-h4rmp" Dec 16 03:15:00.655250 kubelet[2848]: I1216 03:15:00.655245 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/05279c13-1f07-47f3-aaa0-f3eff20006ee-varrun\") pod \"csi-node-driver-h4rmp\" (UID: \"05279c13-1f07-47f3-aaa0-f3eff20006ee\") " pod="calico-system/csi-node-driver-h4rmp" Dec 16 03:15:00.655310 kubelet[2848]: I1216 03:15:00.655271 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05279c13-1f07-47f3-aaa0-f3eff20006ee-kubelet-dir\") pod \"csi-node-driver-h4rmp\" (UID: \"05279c13-1f07-47f3-aaa0-f3eff20006ee\") " pod="calico-system/csi-node-driver-h4rmp" Dec 16 03:15:00.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393637366265343836386166303564643866636330646431336666 Dec 16 03:15:00.665738 kernel: audit: type=1327 audit(1765854900.644:524): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393637366265343836386166303564643866636330646431336666 Dec 16 03:15:00.667628 kubelet[2848]: E1216 03:15:00.667596 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.667756 kubelet[2848]: W1216 03:15:00.667623 2848 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.667845 kubelet[2848]: E1216 03:15:00.667791 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.644000 audit: BPF prog-id=150 op=UNLOAD Dec 16 03:15:00.644000 audit[3289]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3277 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.676170 kubelet[2848]: E1216 03:15:00.676135 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.676244 kubelet[2848]: W1216 03:15:00.676168 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.676244 kubelet[2848]: E1216 03:15:00.676201 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.683256 kernel: audit: type=1334 audit(1765854900.644:525): prog-id=150 op=UNLOAD Dec 16 03:15:00.683356 kernel: audit: type=1300 audit(1765854900.644:525): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3277 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.683411 kernel: audit: type=1327 audit(1765854900.644:525): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393637366265343836386166303564643866636330646431336666 Dec 16 03:15:00.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393637366265343836386166303564643866636330646431336666 Dec 16 03:15:00.644000 audit: BPF prog-id=151 op=LOAD Dec 16 03:15:00.693370 kernel: audit: type=1334 audit(1765854900.644:526): prog-id=151 op=LOAD Dec 16 03:15:00.644000 audit[3289]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3277 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393637366265343836386166303564643866636330646431336666 Dec 16 03:15:00.708487 kernel: audit: type=1300 audit(1765854900.644:526): arch=c000003e 
syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3277 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.708597 kernel: audit: type=1327 audit(1765854900.644:526): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393637366265343836386166303564643866636330646431336666 Dec 16 03:15:00.644000 audit: BPF prog-id=152 op=LOAD Dec 16 03:15:00.644000 audit[3289]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3277 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393637366265343836386166303564643866636330646431336666 Dec 16 03:15:00.644000 audit: BPF prog-id=152 op=UNLOAD Dec 16 03:15:00.644000 audit[3289]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3277 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393637366265343836386166303564643866636330646431336666 Dec 16 03:15:00.644000 audit: BPF 
prog-id=151 op=UNLOAD Dec 16 03:15:00.644000 audit[3289]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3277 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393637366265343836386166303564643866636330646431336666 Dec 16 03:15:00.644000 audit: BPF prog-id=153 op=LOAD Dec 16 03:15:00.644000 audit[3289]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3277 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236393637366265343836386166303564643866636330646431336666 Dec 16 03:15:00.726403 containerd[1606]: time="2025-12-16T03:15:00.726295900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5995899b87-k5b9m,Uid:989356ff-8a91-470e-847a-131d2565b5bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"b69676be4868af05dd8fcc0dd13ffd9e031b1ac0c6376dafa1ef6d87219b8a47\"" Dec 16 03:15:00.728356 kubelet[2848]: E1216 03:15:00.728241 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:00.731317 containerd[1606]: time="2025-12-16T03:15:00.731261887Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 03:15:00.756009 kubelet[2848]: E1216 03:15:00.755951 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.756009 kubelet[2848]: W1216 03:15:00.755985 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.756009 kubelet[2848]: E1216 03:15:00.756005 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.756247 kubelet[2848]: E1216 03:15:00.756221 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.756247 kubelet[2848]: W1216 03:15:00.756233 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.756290 kubelet[2848]: E1216 03:15:00.756249 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.756501 kubelet[2848]: E1216 03:15:00.756469 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.756501 kubelet[2848]: W1216 03:15:00.756484 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.756501 kubelet[2848]: E1216 03:15:00.756500 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.756755 kubelet[2848]: E1216 03:15:00.756706 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.756755 kubelet[2848]: W1216 03:15:00.756739 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.756755 kubelet[2848]: E1216 03:15:00.756752 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.757083 kubelet[2848]: E1216 03:15:00.757054 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.757139 kubelet[2848]: W1216 03:15:00.757080 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.757139 kubelet[2848]: E1216 03:15:00.757120 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.757430 kubelet[2848]: E1216 03:15:00.757402 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.757430 kubelet[2848]: W1216 03:15:00.757414 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.757430 kubelet[2848]: E1216 03:15:00.757427 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.757672 kubelet[2848]: E1216 03:15:00.757651 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.757672 kubelet[2848]: W1216 03:15:00.757663 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.757790 kubelet[2848]: E1216 03:15:00.757699 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.757890 kubelet[2848]: E1216 03:15:00.757873 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.757890 kubelet[2848]: W1216 03:15:00.757883 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.757978 kubelet[2848]: E1216 03:15:00.757919 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.758111 kubelet[2848]: E1216 03:15:00.758093 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.758111 kubelet[2848]: W1216 03:15:00.758103 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.758192 kubelet[2848]: E1216 03:15:00.758133 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.758298 kubelet[2848]: E1216 03:15:00.758282 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.758298 kubelet[2848]: W1216 03:15:00.758291 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.758367 kubelet[2848]: E1216 03:15:00.758322 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.758484 kubelet[2848]: E1216 03:15:00.758467 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.758484 kubelet[2848]: W1216 03:15:00.758476 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.758556 kubelet[2848]: E1216 03:15:00.758502 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.758673 kubelet[2848]: E1216 03:15:00.758656 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.758673 kubelet[2848]: W1216 03:15:00.758665 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.758772 kubelet[2848]: E1216 03:15:00.758678 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.758939 kubelet[2848]: E1216 03:15:00.758919 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.758939 kubelet[2848]: W1216 03:15:00.758930 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.759042 kubelet[2848]: E1216 03:15:00.758976 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.759258 kubelet[2848]: E1216 03:15:00.759241 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.759258 kubelet[2848]: W1216 03:15:00.759251 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.759331 kubelet[2848]: E1216 03:15:00.759317 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.759489 kubelet[2848]: E1216 03:15:00.759457 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.759489 kubelet[2848]: W1216 03:15:00.759482 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.759725 kubelet[2848]: E1216 03:15:00.759522 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.759803 kubelet[2848]: E1216 03:15:00.759702 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.759803 kubelet[2848]: W1216 03:15:00.759756 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.759803 kubelet[2848]: E1216 03:15:00.759787 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.759992 kubelet[2848]: E1216 03:15:00.759978 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.759992 kubelet[2848]: W1216 03:15:00.759989 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.760081 kubelet[2848]: E1216 03:15:00.760028 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.760182 kubelet[2848]: E1216 03:15:00.760167 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.760182 kubelet[2848]: W1216 03:15:00.760178 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.760261 kubelet[2848]: E1216 03:15:00.760202 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.760376 kubelet[2848]: E1216 03:15:00.760362 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.760376 kubelet[2848]: W1216 03:15:00.760372 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.760463 kubelet[2848]: E1216 03:15:00.760387 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.764483 kubelet[2848]: E1216 03:15:00.764465 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.764483 kubelet[2848]: W1216 03:15:00.764477 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.764576 kubelet[2848]: E1216 03:15:00.764491 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.764699 kubelet[2848]: E1216 03:15:00.764685 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.764699 kubelet[2848]: W1216 03:15:00.764694 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.764823 kubelet[2848]: E1216 03:15:00.764706 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.764896 kubelet[2848]: E1216 03:15:00.764882 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.764896 kubelet[2848]: W1216 03:15:00.764891 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.764968 kubelet[2848]: E1216 03:15:00.764903 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.765113 kubelet[2848]: E1216 03:15:00.765099 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.765113 kubelet[2848]: W1216 03:15:00.765109 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.765212 kubelet[2848]: E1216 03:15:00.765134 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.765272 kubelet[2848]: E1216 03:15:00.765259 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.765272 kubelet[2848]: W1216 03:15:00.765269 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.765342 kubelet[2848]: E1216 03:15:00.765277 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.765661 kubelet[2848]: E1216 03:15:00.765646 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.765661 kubelet[2848]: W1216 03:15:00.765656 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.765761 kubelet[2848]: E1216 03:15:00.765664 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:00.769172 kubelet[2848]: E1216 03:15:00.769140 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:00.769172 kubelet[2848]: W1216 03:15:00.769162 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:00.769250 kubelet[2848]: E1216 03:15:00.769182 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:00.779341 kubelet[2848]: E1216 03:15:00.779297 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:00.779954 containerd[1606]: time="2025-12-16T03:15:00.779911350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5cptp,Uid:03bd91df-ae0e-4e82-a4fa-86fb80449796,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:00.808598 containerd[1606]: time="2025-12-16T03:15:00.808556317Z" level=info msg="connecting to shim 93308a3b75d720ff282db40194be7a4d19749e72dc34ce30a7cf3816092ce712" address="unix:///run/containerd/s/8d59605687c08a36e93f3dbfa6998c75539928ee9a83b41e4464f06878fb25c5" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:00.836012 systemd[1]: Started cri-containerd-93308a3b75d720ff282db40194be7a4d19749e72dc34ce30a7cf3816092ce712.scope - libcontainer container 93308a3b75d720ff282db40194be7a4d19749e72dc34ce30a7cf3816092ce712. 
Dec 16 03:15:00.850000 audit: BPF prog-id=154 op=LOAD Dec 16 03:15:00.851000 audit: BPF prog-id=155 op=LOAD Dec 16 03:15:00.851000 audit[3367]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3356 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333038613362373564373230666632383264623430313934626537 Dec 16 03:15:00.851000 audit: BPF prog-id=155 op=UNLOAD Dec 16 03:15:00.851000 audit[3367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333038613362373564373230666632383264623430313934626537 Dec 16 03:15:00.851000 audit: BPF prog-id=156 op=LOAD Dec 16 03:15:00.851000 audit[3367]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3356 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.851000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333038613362373564373230666632383264623430313934626537 Dec 16 03:15:00.852000 audit: BPF prog-id=157 op=LOAD Dec 16 03:15:00.852000 audit[3367]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3356 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333038613362373564373230666632383264623430313934626537 Dec 16 03:15:00.852000 audit: BPF prog-id=157 op=UNLOAD Dec 16 03:15:00.852000 audit[3367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333038613362373564373230666632383264623430313934626537 Dec 16 03:15:00.852000 audit: BPF prog-id=156 op=UNLOAD Dec 16 03:15:00.852000 audit[3367]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:15:00.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333038613362373564373230666632383264623430313934626537 Dec 16 03:15:00.852000 audit: BPF prog-id=158 op=LOAD Dec 16 03:15:00.852000 audit[3367]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3356 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:00.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933333038613362373564373230666632383264623430313934626537 Dec 16 03:15:00.874114 containerd[1606]: time="2025-12-16T03:15:00.874040383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5cptp,Uid:03bd91df-ae0e-4e82-a4fa-86fb80449796,Namespace:calico-system,Attempt:0,} returns sandbox id \"93308a3b75d720ff282db40194be7a4d19749e72dc34ce30a7cf3816092ce712\"" Dec 16 03:15:00.875092 kubelet[2848]: E1216 03:15:00.875052 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:01.241000 audit[3393]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:01.241000 audit[3393]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd985333b0 a2=0 a3=7ffd9853339c items=0 ppid=2997 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:01.241000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:01.251000 audit[3393]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:01.251000 audit[3393]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd985333b0 a2=0 a3=0 items=0 ppid=2997 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:01.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:02.501808 kubelet[2848]: E1216 03:15:02.501686 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:15:02.670889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653389879.mount: Deactivated successfully. 
Dec 16 03:15:03.499407 containerd[1606]: time="2025-12-16T03:15:03.499322450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:03.500556 containerd[1606]: time="2025-12-16T03:15:03.500487506Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35231371" Dec 16 03:15:03.501731 containerd[1606]: time="2025-12-16T03:15:03.501667891Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:03.503757 containerd[1606]: time="2025-12-16T03:15:03.503727593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:03.505945 containerd[1606]: time="2025-12-16T03:15:03.504911104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.773131201s" Dec 16 03:15:03.506024 containerd[1606]: time="2025-12-16T03:15:03.505946661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Dec 16 03:15:03.508099 containerd[1606]: time="2025-12-16T03:15:03.508048694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 03:15:03.522107 containerd[1606]: time="2025-12-16T03:15:03.522047725Z" level=info msg="CreateContainer within sandbox \"b69676be4868af05dd8fcc0dd13ffd9e031b1ac0c6376dafa1ef6d87219b8a47\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 03:15:03.531598 containerd[1606]: time="2025-12-16T03:15:03.531536792Z" level=info msg="Container 2c7098fb7a33453cfdba45f5d2df439766a0ebf6e99a5479b0255a34eb3453a4: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:03.540028 containerd[1606]: time="2025-12-16T03:15:03.539978880Z" level=info msg="CreateContainer within sandbox \"b69676be4868af05dd8fcc0dd13ffd9e031b1ac0c6376dafa1ef6d87219b8a47\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2c7098fb7a33453cfdba45f5d2df439766a0ebf6e99a5479b0255a34eb3453a4\"" Dec 16 03:15:03.540522 containerd[1606]: time="2025-12-16T03:15:03.540459633Z" level=info msg="StartContainer for \"2c7098fb7a33453cfdba45f5d2df439766a0ebf6e99a5479b0255a34eb3453a4\"" Dec 16 03:15:03.541751 containerd[1606]: time="2025-12-16T03:15:03.541707176Z" level=info msg="connecting to shim 2c7098fb7a33453cfdba45f5d2df439766a0ebf6e99a5479b0255a34eb3453a4" address="unix:///run/containerd/s/21d01645c31d1b0952485aa30a9c3485076f9b9d0e75588f1b09e2bac10284a4" protocol=ttrpc version=3 Dec 16 03:15:03.565998 systemd[1]: Started cri-containerd-2c7098fb7a33453cfdba45f5d2df439766a0ebf6e99a5479b0255a34eb3453a4.scope - libcontainer container 2c7098fb7a33453cfdba45f5d2df439766a0ebf6e99a5479b0255a34eb3453a4. 
Dec 16 03:15:03.585000 audit: BPF prog-id=159 op=LOAD Dec 16 03:15:03.585000 audit: BPF prog-id=160 op=LOAD Dec 16 03:15:03.585000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3277 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373039386662376133333435336366646261343566356432646634 Dec 16 03:15:03.586000 audit: BPF prog-id=160 op=UNLOAD Dec 16 03:15:03.586000 audit[3404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3277 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373039386662376133333435336366646261343566356432646634 Dec 16 03:15:03.586000 audit: BPF prog-id=161 op=LOAD Dec 16 03:15:03.586000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3277 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.586000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373039386662376133333435336366646261343566356432646634 Dec 16 03:15:03.586000 audit: BPF prog-id=162 op=LOAD Dec 16 03:15:03.586000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3277 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373039386662376133333435336366646261343566356432646634 Dec 16 03:15:03.586000 audit: BPF prog-id=162 op=UNLOAD Dec 16 03:15:03.586000 audit[3404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3277 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373039386662376133333435336366646261343566356432646634 Dec 16 03:15:03.586000 audit: BPF prog-id=161 op=UNLOAD Dec 16 03:15:03.586000 audit[3404]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3277 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 
03:15:03.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373039386662376133333435336366646261343566356432646634 Dec 16 03:15:03.586000 audit: BPF prog-id=163 op=LOAD Dec 16 03:15:03.586000 audit[3404]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3277 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:03.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263373039386662376133333435336366646261343566356432646634 Dec 16 03:15:03.624460 containerd[1606]: time="2025-12-16T03:15:03.624406347Z" level=info msg="StartContainer for \"2c7098fb7a33453cfdba45f5d2df439766a0ebf6e99a5479b0255a34eb3453a4\" returns successfully" Dec 16 03:15:03.692749 kubelet[2848]: E1216 03:15:03.692656 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:03.707207 kubelet[2848]: I1216 03:15:03.706989 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5995899b87-k5b9m" podStartSLOduration=0.929651437 podStartE2EDuration="3.70697189s" podCreationTimestamp="2025-12-16 03:15:00 +0000 UTC" firstStartedPulling="2025-12-16 03:15:00.730557243 +0000 UTC m=+23.469249653" lastFinishedPulling="2025-12-16 03:15:03.507877676 +0000 UTC m=+26.246570106" observedRunningTime="2025-12-16 03:15:03.706532647 +0000 UTC m=+26.445225058" watchObservedRunningTime="2025-12-16 
03:15:03.70697189 +0000 UTC m=+26.445664300" Dec 16 03:15:03.764871 kubelet[2848]: E1216 03:15:03.764706 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.764871 kubelet[2848]: W1216 03:15:03.764759 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.764871 kubelet[2848]: E1216 03:15:03.764784 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.765077 kubelet[2848]: E1216 03:15:03.765000 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.765077 kubelet[2848]: W1216 03:15:03.765008 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.765077 kubelet[2848]: E1216 03:15:03.765027 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.765730 kubelet[2848]: E1216 03:15:03.765269 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.765730 kubelet[2848]: W1216 03:15:03.765281 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.765730 kubelet[2848]: E1216 03:15:03.765293 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.765730 kubelet[2848]: E1216 03:15:03.765516 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.765730 kubelet[2848]: W1216 03:15:03.765524 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.765730 kubelet[2848]: E1216 03:15:03.765532 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.765897 kubelet[2848]: E1216 03:15:03.765747 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.765897 kubelet[2848]: W1216 03:15:03.765755 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.765897 kubelet[2848]: E1216 03:15:03.765764 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.766088 kubelet[2848]: E1216 03:15:03.765941 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.766088 kubelet[2848]: W1216 03:15:03.765991 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.766088 kubelet[2848]: E1216 03:15:03.765999 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.766229 kubelet[2848]: E1216 03:15:03.766214 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.766229 kubelet[2848]: W1216 03:15:03.766225 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.766288 kubelet[2848]: E1216 03:15:03.766235 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.766454 kubelet[2848]: E1216 03:15:03.766372 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.766454 kubelet[2848]: W1216 03:15:03.766382 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.766454 kubelet[2848]: E1216 03:15:03.766389 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.766682 kubelet[2848]: E1216 03:15:03.766641 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.766682 kubelet[2848]: W1216 03:15:03.766654 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.766682 kubelet[2848]: E1216 03:15:03.766663 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.766987 kubelet[2848]: E1216 03:15:03.766863 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.766987 kubelet[2848]: W1216 03:15:03.766873 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.766987 kubelet[2848]: E1216 03:15:03.766881 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.767203 kubelet[2848]: E1216 03:15:03.767175 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.767203 kubelet[2848]: W1216 03:15:03.767190 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.767203 kubelet[2848]: E1216 03:15:03.767199 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.767435 kubelet[2848]: E1216 03:15:03.767364 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.767435 kubelet[2848]: W1216 03:15:03.767373 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.767435 kubelet[2848]: E1216 03:15:03.767380 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.767890 kubelet[2848]: E1216 03:15:03.767851 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.767937 kubelet[2848]: W1216 03:15:03.767905 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.767975 kubelet[2848]: E1216 03:15:03.767936 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.768884 kubelet[2848]: E1216 03:15:03.768858 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.768884 kubelet[2848]: W1216 03:15:03.768875 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.768957 kubelet[2848]: E1216 03:15:03.768891 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.769161 kubelet[2848]: E1216 03:15:03.769130 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.769161 kubelet[2848]: W1216 03:15:03.769148 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.769161 kubelet[2848]: E1216 03:15:03.769157 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.777273 kubelet[2848]: E1216 03:15:03.777230 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.777273 kubelet[2848]: W1216 03:15:03.777258 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.777273 kubelet[2848]: E1216 03:15:03.777281 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.777530 kubelet[2848]: E1216 03:15:03.777503 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.777530 kubelet[2848]: W1216 03:15:03.777515 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.777530 kubelet[2848]: E1216 03:15:03.777528 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.777807 kubelet[2848]: E1216 03:15:03.777753 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.777807 kubelet[2848]: W1216 03:15:03.777770 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.777807 kubelet[2848]: E1216 03:15:03.777788 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.778526 kubelet[2848]: E1216 03:15:03.778497 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.778526 kubelet[2848]: W1216 03:15:03.778512 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.778526 kubelet[2848]: E1216 03:15:03.778530 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.778864 kubelet[2848]: E1216 03:15:03.778829 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.778864 kubelet[2848]: W1216 03:15:03.778857 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.779049 kubelet[2848]: E1216 03:15:03.779028 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.779485 kubelet[2848]: E1216 03:15:03.779461 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.779485 kubelet[2848]: W1216 03:15:03.779477 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.779554 kubelet[2848]: E1216 03:15:03.779527 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.779777 kubelet[2848]: E1216 03:15:03.779753 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.779777 kubelet[2848]: W1216 03:15:03.779771 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.779861 kubelet[2848]: E1216 03:15:03.779813 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.780071 kubelet[2848]: E1216 03:15:03.780001 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.780071 kubelet[2848]: W1216 03:15:03.780026 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.780239 kubelet[2848]: E1216 03:15:03.780193 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.780421 kubelet[2848]: E1216 03:15:03.780400 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.780492 kubelet[2848]: W1216 03:15:03.780470 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.780618 kubelet[2848]: E1216 03:15:03.780544 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.780906 kubelet[2848]: E1216 03:15:03.780893 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.781200 kubelet[2848]: W1216 03:15:03.780961 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.781200 kubelet[2848]: E1216 03:15:03.780975 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.781200 kubelet[2848]: E1216 03:15:03.781174 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.781200 kubelet[2848]: W1216 03:15:03.781184 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.781311 kubelet[2848]: E1216 03:15:03.781265 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.781370 kubelet[2848]: E1216 03:15:03.781350 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.781370 kubelet[2848]: W1216 03:15:03.781362 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.781465 kubelet[2848]: E1216 03:15:03.781440 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.781607 kubelet[2848]: E1216 03:15:03.781584 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.781607 kubelet[2848]: W1216 03:15:03.781598 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.781683 kubelet[2848]: E1216 03:15:03.781677 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.782047 kubelet[2848]: E1216 03:15:03.782031 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.782047 kubelet[2848]: W1216 03:15:03.782042 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.782135 kubelet[2848]: E1216 03:15:03.782065 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.782311 kubelet[2848]: E1216 03:15:03.782297 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.782311 kubelet[2848]: W1216 03:15:03.782307 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.782368 kubelet[2848]: E1216 03:15:03.782320 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.782826 kubelet[2848]: E1216 03:15:03.782810 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.782826 kubelet[2848]: W1216 03:15:03.782820 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.782891 kubelet[2848]: E1216 03:15:03.782834 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:03.783093 kubelet[2848]: E1216 03:15:03.783078 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.783093 kubelet[2848]: W1216 03:15:03.783089 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.783149 kubelet[2848]: E1216 03:15:03.783101 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:03.783272 kubelet[2848]: E1216 03:15:03.783258 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:03.783272 kubelet[2848]: W1216 03:15:03.783268 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:03.783321 kubelet[2848]: E1216 03:15:03.783276 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.501596 kubelet[2848]: E1216 03:15:04.501515 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:15:04.693209 kubelet[2848]: I1216 03:15:04.693155 2848 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:15:04.693785 kubelet[2848]: E1216 03:15:04.693603 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:04.776684 kubelet[2848]: E1216 03:15:04.776553 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.776684 kubelet[2848]: W1216 03:15:04.776580 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.776684 kubelet[2848]: E1216 03:15:04.776605 2848 
plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.776895 kubelet[2848]: E1216 03:15:04.776829 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.776895 kubelet[2848]: W1216 03:15:04.776838 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.776895 kubelet[2848]: E1216 03:15:04.776848 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.777062 kubelet[2848]: E1216 03:15:04.777037 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.777062 kubelet[2848]: W1216 03:15:04.777048 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.777062 kubelet[2848]: E1216 03:15:04.777056 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.777224 kubelet[2848]: E1216 03:15:04.777204 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.777224 kubelet[2848]: W1216 03:15:04.777214 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.777224 kubelet[2848]: E1216 03:15:04.777222 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.777398 kubelet[2848]: E1216 03:15:04.777374 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.777398 kubelet[2848]: W1216 03:15:04.777389 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.777398 kubelet[2848]: E1216 03:15:04.777397 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.777548 kubelet[2848]: E1216 03:15:04.777534 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.777548 kubelet[2848]: W1216 03:15:04.777544 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.777591 kubelet[2848]: E1216 03:15:04.777551 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.777708 kubelet[2848]: E1216 03:15:04.777694 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.777708 kubelet[2848]: W1216 03:15:04.777701 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.777765 kubelet[2848]: E1216 03:15:04.777708 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.777885 kubelet[2848]: E1216 03:15:04.777871 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.777885 kubelet[2848]: W1216 03:15:04.777880 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.777934 kubelet[2848]: E1216 03:15:04.777887 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.778057 kubelet[2848]: E1216 03:15:04.778042 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.778057 kubelet[2848]: W1216 03:15:04.778052 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.778106 kubelet[2848]: E1216 03:15:04.778059 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.778223 kubelet[2848]: E1216 03:15:04.778204 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.778223 kubelet[2848]: W1216 03:15:04.778215 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.778223 kubelet[2848]: E1216 03:15:04.778227 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.778405 kubelet[2848]: E1216 03:15:04.778390 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.778405 kubelet[2848]: W1216 03:15:04.778401 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.778449 kubelet[2848]: E1216 03:15:04.778409 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.778591 kubelet[2848]: E1216 03:15:04.778573 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.778591 kubelet[2848]: W1216 03:15:04.778588 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.778591 kubelet[2848]: E1216 03:15:04.778599 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.778805 kubelet[2848]: E1216 03:15:04.778791 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.778805 kubelet[2848]: W1216 03:15:04.778801 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.778868 kubelet[2848]: E1216 03:15:04.778809 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.778984 kubelet[2848]: E1216 03:15:04.778969 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.778984 kubelet[2848]: W1216 03:15:04.778979 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.779063 kubelet[2848]: E1216 03:15:04.778988 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.779196 kubelet[2848]: E1216 03:15:04.779162 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.779196 kubelet[2848]: W1216 03:15:04.779182 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.779196 kubelet[2848]: E1216 03:15:04.779189 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.785734 kubelet[2848]: E1216 03:15:04.785687 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.785805 kubelet[2848]: W1216 03:15:04.785735 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.785805 kubelet[2848]: E1216 03:15:04.785765 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.786048 kubelet[2848]: E1216 03:15:04.786006 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.786048 kubelet[2848]: W1216 03:15:04.786019 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.786120 kubelet[2848]: E1216 03:15:04.786057 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.786283 kubelet[2848]: E1216 03:15:04.786264 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.786283 kubelet[2848]: W1216 03:15:04.786273 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.786341 kubelet[2848]: E1216 03:15:04.786287 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.786481 kubelet[2848]: E1216 03:15:04.786456 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.786481 kubelet[2848]: W1216 03:15:04.786471 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.786536 kubelet[2848]: E1216 03:15:04.786484 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.786680 kubelet[2848]: E1216 03:15:04.786652 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.786680 kubelet[2848]: W1216 03:15:04.786666 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.786776 kubelet[2848]: E1216 03:15:04.786682 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.786873 kubelet[2848]: E1216 03:15:04.786857 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.786873 kubelet[2848]: W1216 03:15:04.786868 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.786916 kubelet[2848]: E1216 03:15:04.786882 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.787099 kubelet[2848]: E1216 03:15:04.787086 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.787099 kubelet[2848]: W1216 03:15:04.787097 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.787163 kubelet[2848]: E1216 03:15:04.787111 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.787298 kubelet[2848]: E1216 03:15:04.787282 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.787298 kubelet[2848]: W1216 03:15:04.787296 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.787343 kubelet[2848]: E1216 03:15:04.787309 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.787492 kubelet[2848]: E1216 03:15:04.787477 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.787492 kubelet[2848]: W1216 03:15:04.787489 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.787542 kubelet[2848]: E1216 03:15:04.787503 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.787699 kubelet[2848]: E1216 03:15:04.787684 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.787699 kubelet[2848]: W1216 03:15:04.787695 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.787767 kubelet[2848]: E1216 03:15:04.787708 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.787901 kubelet[2848]: E1216 03:15:04.787891 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.787901 kubelet[2848]: W1216 03:15:04.787899 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.787948 kubelet[2848]: E1216 03:15:04.787913 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.788094 kubelet[2848]: E1216 03:15:04.788078 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.788094 kubelet[2848]: W1216 03:15:04.788089 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.788141 kubelet[2848]: E1216 03:15:04.788101 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.788306 kubelet[2848]: E1216 03:15:04.788290 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.788306 kubelet[2848]: W1216 03:15:04.788300 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.788391 kubelet[2848]: E1216 03:15:04.788313 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.788497 kubelet[2848]: E1216 03:15:04.788485 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.788497 kubelet[2848]: W1216 03:15:04.788494 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.788543 kubelet[2848]: E1216 03:15:04.788506 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.788704 kubelet[2848]: E1216 03:15:04.788693 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.788704 kubelet[2848]: W1216 03:15:04.788702 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.788774 kubelet[2848]: E1216 03:15:04.788727 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.789042 kubelet[2848]: E1216 03:15:04.789005 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.789042 kubelet[2848]: W1216 03:15:04.789024 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.789097 kubelet[2848]: E1216 03:15:04.789052 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:04.789305 kubelet[2848]: E1216 03:15:04.789285 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.789305 kubelet[2848]: W1216 03:15:04.789299 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.789357 kubelet[2848]: E1216 03:15:04.789312 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 03:15:04.789500 kubelet[2848]: E1216 03:15:04.789489 2848 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 03:15:04.789500 kubelet[2848]: W1216 03:15:04.789498 2848 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 03:15:04.789543 kubelet[2848]: E1216 03:15:04.789506 2848 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 03:15:05.030959 containerd[1606]: time="2025-12-16T03:15:05.030826927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:05.032117 containerd[1606]: time="2025-12-16T03:15:05.031893812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:05.033567 containerd[1606]: time="2025-12-16T03:15:05.033530940Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:05.035814 containerd[1606]: time="2025-12-16T03:15:05.035767206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:05.036520 containerd[1606]: time="2025-12-16T03:15:05.036471035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.527865131s" Dec 16 03:15:05.036520 containerd[1606]: time="2025-12-16T03:15:05.036510971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Dec 16 03:15:05.038681 containerd[1606]: time="2025-12-16T03:15:05.038363471Z" level=info msg="CreateContainer within sandbox \"93308a3b75d720ff282db40194be7a4d19749e72dc34ce30a7cf3816092ce712\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 03:15:05.049396 containerd[1606]: time="2025-12-16T03:15:05.049332735Z" level=info msg="Container 78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:05.059131 containerd[1606]: time="2025-12-16T03:15:05.059079765Z" level=info msg="CreateContainer within sandbox \"93308a3b75d720ff282db40194be7a4d19749e72dc34ce30a7cf3816092ce712\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f\"" Dec 16 03:15:05.059817 containerd[1606]: time="2025-12-16T03:15:05.059778003Z" level=info msg="StartContainer for \"78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f\"" Dec 16 03:15:05.061621 containerd[1606]: time="2025-12-16T03:15:05.061590957Z" level=info msg="connecting to shim 78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f" address="unix:///run/containerd/s/8d59605687c08a36e93f3dbfa6998c75539928ee9a83b41e4464f06878fb25c5" protocol=ttrpc version=3 Dec 16 03:15:05.084931 systemd[1]: Started cri-containerd-78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f.scope - libcontainer container 78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f. 
Dec 16 03:15:05.179000 audit: BPF prog-id=164 op=LOAD Dec 16 03:15:05.179000 audit[3515]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3356 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:05.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303636643062623564323236646364363235326237346237633232 Dec 16 03:15:05.179000 audit: BPF prog-id=165 op=LOAD Dec 16 03:15:05.179000 audit[3515]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3356 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:05.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303636643062623564323236646364363235326237346237633232 Dec 16 03:15:05.180000 audit: BPF prog-id=165 op=UNLOAD Dec 16 03:15:05.180000 audit[3515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:05.180000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303636643062623564323236646364363235326237346237633232 Dec 16 03:15:05.180000 audit: BPF prog-id=164 op=UNLOAD Dec 16 03:15:05.180000 audit[3515]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:05.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303636643062623564323236646364363235326237346237633232 Dec 16 03:15:05.180000 audit: BPF prog-id=166 op=LOAD Dec 16 03:15:05.180000 audit[3515]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3356 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:05.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738303636643062623564323236646364363235326237346237633232 Dec 16 03:15:05.201064 containerd[1606]: time="2025-12-16T03:15:05.201006961Z" level=info msg="StartContainer for \"78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f\" returns successfully" Dec 16 03:15:05.214099 systemd[1]: cri-containerd-78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f.scope: Deactivated successfully. 
Dec 16 03:15:05.217525 containerd[1606]: time="2025-12-16T03:15:05.217455696Z" level=info msg="received container exit event container_id:\"78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f\" id:\"78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f\" pid:3528 exited_at:{seconds:1765854905 nanos:216874963}" Dec 16 03:15:05.220000 audit: BPF prog-id=166 op=UNLOAD Dec 16 03:15:05.249995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78066d0bb5d226dcd6252b74b7c227b52e498ce5ffd2c63c65f3c4510a797b0f-rootfs.mount: Deactivated successfully. Dec 16 03:15:05.751435 kubelet[2848]: E1216 03:15:05.696214 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:06.501922 kubelet[2848]: E1216 03:15:06.501847 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:15:06.699913 kubelet[2848]: E1216 03:15:06.699854 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:06.700916 containerd[1606]: time="2025-12-16T03:15:06.700877546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 03:15:08.501996 kubelet[2848]: E1216 03:15:08.501920 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:15:10.501909 
kubelet[2848]: E1216 03:15:10.501845 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:15:10.710703 containerd[1606]: time="2025-12-16T03:15:10.710616009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:10.711851 containerd[1606]: time="2025-12-16T03:15:10.711777299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Dec 16 03:15:10.713219 containerd[1606]: time="2025-12-16T03:15:10.713178317Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:10.717974 containerd[1606]: time="2025-12-16T03:15:10.717883761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:10.720479 containerd[1606]: time="2025-12-16T03:15:10.720429367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.018723657s" Dec 16 03:15:10.720577 containerd[1606]: time="2025-12-16T03:15:10.720483290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Dec 16 
03:15:10.723438 containerd[1606]: time="2025-12-16T03:15:10.723366923Z" level=info msg="CreateContainer within sandbox \"93308a3b75d720ff282db40194be7a4d19749e72dc34ce30a7cf3816092ce712\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 03:15:10.733790 containerd[1606]: time="2025-12-16T03:15:10.733692811Z" level=info msg="Container 2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:10.747754 containerd[1606]: time="2025-12-16T03:15:10.747638227Z" level=info msg="CreateContainer within sandbox \"93308a3b75d720ff282db40194be7a4d19749e72dc34ce30a7cf3816092ce712\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef\"" Dec 16 03:15:10.748749 containerd[1606]: time="2025-12-16T03:15:10.748659119Z" level=info msg="StartContainer for \"2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef\"" Dec 16 03:15:10.751312 containerd[1606]: time="2025-12-16T03:15:10.751275039Z" level=info msg="connecting to shim 2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef" address="unix:///run/containerd/s/8d59605687c08a36e93f3dbfa6998c75539928ee9a83b41e4464f06878fb25c5" protocol=ttrpc version=3 Dec 16 03:15:10.781996 systemd[1]: Started cri-containerd-2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef.scope - libcontainer container 2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef. 
Dec 16 03:15:10.850000 audit: BPF prog-id=167 op=LOAD
Dec 16 03:15:10.853849 kernel: kauditd_printk_skb: 78 callbacks suppressed
Dec 16 03:15:10.853917 kernel: audit: type=1334 audit(1765854910.850:555): prog-id=167 op=LOAD
Dec 16 03:15:10.850000 audit[3576]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3356 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:15:10.861876 kernel: audit: type=1300 audit(1765854910.850:555): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3356 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:15:10.861942 kernel: audit: type=1327 audit(1765854910.850:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313363306635323331383666376664353436646264373539353737
Dec 16 03:15:10.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313363306635323331383666376664353436646264373539353737
Dec 16 03:15:10.850000 audit: BPF prog-id=168 op=LOAD
Dec 16 03:15:10.869765 kernel: audit: type=1334 audit(1765854910.850:556): prog-id=168 op=LOAD
Dec 16 03:15:10.869828 kernel: audit: type=1300 audit(1765854910.850:556): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3356 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:15:10.850000 audit[3576]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3356 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:15:10.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313363306635323331383666376664353436646264373539353737
Dec 16 03:15:10.882217 kernel: audit: type=1327 audit(1765854910.850:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313363306635323331383666376664353436646264373539353737
Dec 16 03:15:10.882352 kernel: audit: type=1334 audit(1765854910.850:557): prog-id=168 op=UNLOAD
Dec 16 03:15:10.850000 audit: BPF prog-id=168 op=UNLOAD
Dec 16 03:15:10.850000 audit[3576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:15:10.889598 kernel: audit: type=1300 audit(1765854910.850:557): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:15:10.889878 kernel: audit: type=1327 audit(1765854910.850:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313363306635323331383666376664353436646264373539353737
Dec 16 03:15:10.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313363306635323331383666376664353436646264373539353737
Dec 16 03:15:10.850000 audit: BPF prog-id=167 op=UNLOAD
Dec 16 03:15:10.897064 kernel: audit: type=1334 audit(1765854910.850:558): prog-id=167 op=UNLOAD
Dec 16 03:15:10.850000 audit[3576]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:15:10.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313363306635323331383666376664353436646264373539353737
Dec 16 03:15:10.850000 audit: BPF prog-id=169 op=LOAD
Dec 16 03:15:10.850000 audit[3576]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3356 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:15:10.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313363306635323331383666376664353436646264373539353737
Dec 16 03:15:10.903741 containerd[1606]: time="2025-12-16T03:15:10.903656820Z" level=info msg="StartContainer for \"2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef\" returns successfully"
Dec 16 03:15:11.715660 kubelet[2848]: E1216 03:15:11.715608 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:15:12.241964 systemd[1]: cri-containerd-2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef.scope: Deactivated successfully.
Dec 16 03:15:12.242461 systemd[1]: cri-containerd-2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef.scope: Consumed 675ms CPU time, 178.9M memory peak, 1.1M read from disk, 171.3M written to disk.
Dec 16 03:15:12.245673 containerd[1606]: time="2025-12-16T03:15:12.244947023Z" level=info msg="received container exit event container_id:\"2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef\" id:\"2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef\" pid:3589 exited_at:{seconds:1765854912 nanos:244284587}"
Dec 16 03:15:12.246000 audit: BPF prog-id=169 op=UNLOAD
Dec 16 03:15:12.271553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2013c0f523186f7fd546dbd75957762b1582d35d01976a70b0c9869e9df0b0ef-rootfs.mount: Deactivated successfully.
Dec 16 03:15:12.310020 kubelet[2848]: I1216 03:15:12.309971 2848 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Dec 16 03:15:12.440589 systemd[1]: Created slice kubepods-burstable-pod7167e9f6_6a0b_43eb_9986_460671ce7a4d.slice - libcontainer container kubepods-burstable-pod7167e9f6_6a0b_43eb_9986_460671ce7a4d.slice.
Dec 16 03:15:12.447165 systemd[1]: Created slice kubepods-burstable-pod534f1053_c823_4a53_90bf_e8b0c562d2fe.slice - libcontainer container kubepods-burstable-pod534f1053_c823_4a53_90bf_e8b0c562d2fe.slice.
Dec 16 03:15:12.454148 systemd[1]: Created slice kubepods-besteffort-pod9e2da91d_bd6f_474c_851c_2fd9d9db86f3.slice - libcontainer container kubepods-besteffort-pod9e2da91d_bd6f_474c_851c_2fd9d9db86f3.slice.
Dec 16 03:15:12.458836 systemd[1]: Created slice kubepods-besteffort-pod629371f1_f66b_44ce_8151_2d326255465b.slice - libcontainer container kubepods-besteffort-pod629371f1_f66b_44ce_8151_2d326255465b.slice.
Dec 16 03:15:12.465878 systemd[1]: Created slice kubepods-besteffort-podf4b3ad00_5810_4c4f_99d3_1bda488b3dc0.slice - libcontainer container kubepods-besteffort-podf4b3ad00_5810_4c4f_99d3_1bda488b3dc0.slice.
Dec 16 03:15:12.470563 systemd[1]: Created slice kubepods-besteffort-podf6a2c05c_26b5_45cc_94cd_f96ac9ec6971.slice - libcontainer container kubepods-besteffort-podf6a2c05c_26b5_45cc_94cd_f96ac9ec6971.slice.
Dec 16 03:15:12.475986 systemd[1]: Created slice kubepods-besteffort-pod5b1e8305_d364_4bc6_9a3a_e97daf2d06ed.slice - libcontainer container kubepods-besteffort-pod5b1e8305_d364_4bc6_9a3a_e97daf2d06ed.slice.
Dec 16 03:15:12.493666 kubelet[2848]: I1216 03:15:12.493518 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7167e9f6-6a0b-43eb-9986-460671ce7a4d-config-volume\") pod \"coredns-668d6bf9bc-l2dsg\" (UID: \"7167e9f6-6a0b-43eb-9986-460671ce7a4d\") " pod="kube-system/coredns-668d6bf9bc-l2dsg"
Dec 16 03:15:12.493666 kubelet[2848]: I1216 03:15:12.493581 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b1e8305-d364-4bc6-9a3a-e97daf2d06ed-config\") pod \"goldmane-666569f655-vwjmk\" (UID: \"5b1e8305-d364-4bc6-9a3a-e97daf2d06ed\") " pod="calico-system/goldmane-666569f655-vwjmk"
Dec 16 03:15:12.493666 kubelet[2848]: I1216 03:15:12.493612 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-whisker-backend-key-pair\") pod \"whisker-59c569b967-f9bm2\" (UID: \"f4b3ad00-5810-4c4f-99d3-1bda488b3dc0\") " pod="calico-system/whisker-59c569b967-f9bm2"
Dec 16 03:15:12.493666 kubelet[2848]: I1216 03:15:12.493637 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjlbs\" (UniqueName: \"kubernetes.io/projected/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-kube-api-access-wjlbs\") pod \"whisker-59c569b967-f9bm2\" (UID: \"f4b3ad00-5810-4c4f-99d3-1bda488b3dc0\") " pod="calico-system/whisker-59c569b967-f9bm2"
Dec 16 03:15:12.493666 kubelet[2848]: I1216 03:15:12.493666 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f6a2c05c-26b5-45cc-94cd-f96ac9ec6971-calico-apiserver-certs\") pod \"calico-apiserver-6d7bb69b54-5cccw\" (UID: \"f6a2c05c-26b5-45cc-94cd-f96ac9ec6971\") " pod="calico-apiserver/calico-apiserver-6d7bb69b54-5cccw"
Dec 16 03:15:12.494025 kubelet[2848]: I1216 03:15:12.493764 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9e2da91d-bd6f-474c-851c-2fd9d9db86f3-calico-apiserver-certs\") pod \"calico-apiserver-6d7bb69b54-c7929\" (UID: \"9e2da91d-bd6f-474c-851c-2fd9d9db86f3\") " pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929"
Dec 16 03:15:12.494025 kubelet[2848]: I1216 03:15:12.493798 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2x5g\" (UniqueName: \"kubernetes.io/projected/7167e9f6-6a0b-43eb-9986-460671ce7a4d-kube-api-access-d2x5g\") pod \"coredns-668d6bf9bc-l2dsg\" (UID: \"7167e9f6-6a0b-43eb-9986-460671ce7a4d\") " pod="kube-system/coredns-668d6bf9bc-l2dsg"
Dec 16 03:15:12.494025 kubelet[2848]: I1216 03:15:12.493823 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/534f1053-c823-4a53-90bf-e8b0c562d2fe-config-volume\") pod \"coredns-668d6bf9bc-dvxtw\" (UID: \"534f1053-c823-4a53-90bf-e8b0c562d2fe\") " pod="kube-system/coredns-668d6bf9bc-dvxtw"
Dec 16 03:15:12.494025 kubelet[2848]: I1216 03:15:12.493862 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zplnm\" (UniqueName: \"kubernetes.io/projected/5b1e8305-d364-4bc6-9a3a-e97daf2d06ed-kube-api-access-zplnm\") pod \"goldmane-666569f655-vwjmk\" (UID: \"5b1e8305-d364-4bc6-9a3a-e97daf2d06ed\") " pod="calico-system/goldmane-666569f655-vwjmk"
Dec 16 03:15:12.494025 kubelet[2848]: I1216 03:15:12.493892 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/629371f1-f66b-44ce-8151-2d326255465b-tigera-ca-bundle\") pod \"calico-kube-controllers-79777bd46b-pgqh8\" (UID: \"629371f1-f66b-44ce-8151-2d326255465b\") " pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8"
Dec 16 03:15:12.494194 kubelet[2848]: I1216 03:15:12.493922 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtrdk\" (UniqueName: \"kubernetes.io/projected/629371f1-f66b-44ce-8151-2d326255465b-kube-api-access-jtrdk\") pod \"calico-kube-controllers-79777bd46b-pgqh8\" (UID: \"629371f1-f66b-44ce-8151-2d326255465b\") " pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8"
Dec 16 03:15:12.494194 kubelet[2848]: I1216 03:15:12.493948 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtxvk\" (UniqueName: \"kubernetes.io/projected/9e2da91d-bd6f-474c-851c-2fd9d9db86f3-kube-api-access-gtxvk\") pod \"calico-apiserver-6d7bb69b54-c7929\" (UID: \"9e2da91d-bd6f-474c-851c-2fd9d9db86f3\") " pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929"
Dec 16 03:15:12.494194 kubelet[2848]: I1216 03:15:12.493975 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6cjb\" (UniqueName: \"kubernetes.io/projected/f6a2c05c-26b5-45cc-94cd-f96ac9ec6971-kube-api-access-s6cjb\") pod \"calico-apiserver-6d7bb69b54-5cccw\" (UID: \"f6a2c05c-26b5-45cc-94cd-f96ac9ec6971\") " pod="calico-apiserver/calico-apiserver-6d7bb69b54-5cccw"
Dec 16 03:15:12.494194 kubelet[2848]: I1216 03:15:12.493995 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5b1e8305-d364-4bc6-9a3a-e97daf2d06ed-goldmane-key-pair\") pod \"goldmane-666569f655-vwjmk\" (UID: \"5b1e8305-d364-4bc6-9a3a-e97daf2d06ed\") " pod="calico-system/goldmane-666569f655-vwjmk"
Dec 16 03:15:12.494194 kubelet[2848]: I1216 03:15:12.494015 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-whisker-ca-bundle\") pod \"whisker-59c569b967-f9bm2\" (UID: \"f4b3ad00-5810-4c4f-99d3-1bda488b3dc0\") " pod="calico-system/whisker-59c569b967-f9bm2"
Dec 16 03:15:12.494349 kubelet[2848]: I1216 03:15:12.494037 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24pgq\" (UniqueName: \"kubernetes.io/projected/534f1053-c823-4a53-90bf-e8b0c562d2fe-kube-api-access-24pgq\") pod \"coredns-668d6bf9bc-dvxtw\" (UID: \"534f1053-c823-4a53-90bf-e8b0c562d2fe\") " pod="kube-system/coredns-668d6bf9bc-dvxtw"
Dec 16 03:15:12.494349 kubelet[2848]: I1216 03:15:12.494059 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b1e8305-d364-4bc6-9a3a-e97daf2d06ed-goldmane-ca-bundle\") pod \"goldmane-666569f655-vwjmk\" (UID: \"5b1e8305-d364-4bc6-9a3a-e97daf2d06ed\") " pod="calico-system/goldmane-666569f655-vwjmk"
Dec 16 03:15:12.508438 systemd[1]: Created slice kubepods-besteffort-pod05279c13_1f07_47f3_aaa0_f3eff20006ee.slice - libcontainer container kubepods-besteffort-pod05279c13_1f07_47f3_aaa0_f3eff20006ee.slice.
Dec 16 03:15:12.511255 containerd[1606]: time="2025-12-16T03:15:12.511185376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4rmp,Uid:05279c13-1f07-47f3-aaa0-f3eff20006ee,Namespace:calico-system,Attempt:0,}"
Dec 16 03:15:12.720361 kubelet[2848]: E1216 03:15:12.720319 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:15:12.721883 containerd[1606]: time="2025-12-16T03:15:12.721815794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Dec 16 03:15:12.746416 kubelet[2848]: E1216 03:15:12.745892 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:15:12.748303 containerd[1606]: time="2025-12-16T03:15:12.748247422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l2dsg,Uid:7167e9f6-6a0b-43eb-9986-460671ce7a4d,Namespace:kube-system,Attempt:0,}"
Dec 16 03:15:12.750559 kubelet[2848]: E1216 03:15:12.750528 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 16 03:15:12.753272 containerd[1606]: time="2025-12-16T03:15:12.753233594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dvxtw,Uid:534f1053-c823-4a53-90bf-e8b0c562d2fe,Namespace:kube-system,Attempt:0,}"
Dec 16 03:15:12.758610 containerd[1606]: time="2025-12-16T03:15:12.758205549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7bb69b54-c7929,Uid:9e2da91d-bd6f-474c-851c-2fd9d9db86f3,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 03:15:12.775977 containerd[1606]: time="2025-12-16T03:15:12.775906954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c569b967-f9bm2,Uid:f4b3ad00-5810-4c4f-99d3-1bda488b3dc0,Namespace:calico-system,Attempt:0,}"
Dec 16 03:15:12.776183 containerd[1606]: time="2025-12-16T03:15:12.776106805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79777bd46b-pgqh8,Uid:629371f1-f66b-44ce-8151-2d326255465b,Namespace:calico-system,Attempt:0,}"
Dec 16 03:15:12.781335 containerd[1606]: time="2025-12-16T03:15:12.780940066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7bb69b54-5cccw,Uid:f6a2c05c-26b5-45cc-94cd-f96ac9ec6971,Namespace:calico-apiserver,Attempt:0,}"
Dec 16 03:15:12.782455 containerd[1606]: time="2025-12-16T03:15:12.781015950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vwjmk,Uid:5b1e8305-d364-4bc6-9a3a-e97daf2d06ed,Namespace:calico-system,Attempt:0,}"
Dec 16 03:15:12.852638 containerd[1606]: time="2025-12-16T03:15:12.852376440Z" level=error msg="Failed to destroy network for sandbox \"eb404a1b9450b98242c31d9ddbc9396e7ef3ec81c120e52b6e5e46df69038493\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.854586 containerd[1606]: time="2025-12-16T03:15:12.854523451Z" level=error msg="Failed to destroy network for sandbox \"83715a3fa49a3c595a61a8af71d5a4ebb5ba272bc35f7619049ecb4d48a5e193\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.875039 containerd[1606]: time="2025-12-16T03:15:12.874965059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l2dsg,Uid:7167e9f6-6a0b-43eb-9986-460671ce7a4d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83715a3fa49a3c595a61a8af71d5a4ebb5ba272bc35f7619049ecb4d48a5e193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.875407 containerd[1606]: time="2025-12-16T03:15:12.874980928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4rmp,Uid:05279c13-1f07-47f3-aaa0-f3eff20006ee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb404a1b9450b98242c31d9ddbc9396e7ef3ec81c120e52b6e5e46df69038493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.875837 kubelet[2848]: E1216 03:15:12.875791 2848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83715a3fa49a3c595a61a8af71d5a4ebb5ba272bc35f7619049ecb4d48a5e193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.876895 kubelet[2848]: E1216 03:15:12.875974 2848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83715a3fa49a3c595a61a8af71d5a4ebb5ba272bc35f7619049ecb4d48a5e193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l2dsg"
Dec 16 03:15:12.876895 kubelet[2848]: E1216 03:15:12.876006 2848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83715a3fa49a3c595a61a8af71d5a4ebb5ba272bc35f7619049ecb4d48a5e193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l2dsg"
Dec 16 03:15:12.876895 kubelet[2848]: E1216 03:15:12.876064 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-l2dsg_kube-system(7167e9f6-6a0b-43eb-9986-460671ce7a4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-l2dsg_kube-system(7167e9f6-6a0b-43eb-9986-460671ce7a4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83715a3fa49a3c595a61a8af71d5a4ebb5ba272bc35f7619049ecb4d48a5e193\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l2dsg" podUID="7167e9f6-6a0b-43eb-9986-460671ce7a4d"
Dec 16 03:15:12.878281 kubelet[2848]: E1216 03:15:12.877962 2848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb404a1b9450b98242c31d9ddbc9396e7ef3ec81c120e52b6e5e46df69038493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.878281 kubelet[2848]: E1216 03:15:12.878008 2848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb404a1b9450b98242c31d9ddbc9396e7ef3ec81c120e52b6e5e46df69038493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h4rmp"
Dec 16 03:15:12.878281 kubelet[2848]: E1216 03:15:12.878028 2848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb404a1b9450b98242c31d9ddbc9396e7ef3ec81c120e52b6e5e46df69038493\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h4rmp"
Dec 16 03:15:12.878542 kubelet[2848]: E1216 03:15:12.878063 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h4rmp_calico-system(05279c13-1f07-47f3-aaa0-f3eff20006ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h4rmp_calico-system(05279c13-1f07-47f3-aaa0-f3eff20006ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb404a1b9450b98242c31d9ddbc9396e7ef3ec81c120e52b6e5e46df69038493\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee"
Dec 16 03:15:12.921503 containerd[1606]: time="2025-12-16T03:15:12.921424634Z" level=error msg="Failed to destroy network for sandbox \"e732cdc5736c3dcdca55db7518d8b5ef2cdfc845e4952ea2a391432f5750efc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.926988 containerd[1606]: time="2025-12-16T03:15:12.926797525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dvxtw,Uid:534f1053-c823-4a53-90bf-e8b0c562d2fe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e732cdc5736c3dcdca55db7518d8b5ef2cdfc845e4952ea2a391432f5750efc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.927201 kubelet[2848]: E1216 03:15:12.927115 2848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e732cdc5736c3dcdca55db7518d8b5ef2cdfc845e4952ea2a391432f5750efc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.927286 kubelet[2848]: E1216 03:15:12.927213 2848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e732cdc5736c3dcdca55db7518d8b5ef2cdfc845e4952ea2a391432f5750efc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dvxtw"
Dec 16 03:15:12.927286 kubelet[2848]: E1216 03:15:12.927243 2848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e732cdc5736c3dcdca55db7518d8b5ef2cdfc845e4952ea2a391432f5750efc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dvxtw"
Dec 16 03:15:12.927459 kubelet[2848]: E1216 03:15:12.927322 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dvxtw_kube-system(534f1053-c823-4a53-90bf-e8b0c562d2fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dvxtw_kube-system(534f1053-c823-4a53-90bf-e8b0c562d2fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e732cdc5736c3dcdca55db7518d8b5ef2cdfc845e4952ea2a391432f5750efc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dvxtw" podUID="534f1053-c823-4a53-90bf-e8b0c562d2fe"
Dec 16 03:15:12.964602 containerd[1606]: time="2025-12-16T03:15:12.964518031Z" level=error msg="Failed to destroy network for sandbox \"1799bddf62d433b9dbb110f669ee6f8ea0a8b8e9dac806a56fb6ccaf1c01a97d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.968957 containerd[1606]: time="2025-12-16T03:15:12.968884390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79777bd46b-pgqh8,Uid:629371f1-f66b-44ce-8151-2d326255465b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1799bddf62d433b9dbb110f669ee6f8ea0a8b8e9dac806a56fb6ccaf1c01a97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.969274 containerd[1606]: time="2025-12-16T03:15:12.968910719Z" level=error msg="Failed to destroy network for sandbox \"10f07aced78abbcb7a1a8e4536c695ff46695f8cda4666c8b940a9c6c9927e2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.969303 kubelet[2848]: E1216 03:15:12.969261 2848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1799bddf62d433b9dbb110f669ee6f8ea0a8b8e9dac806a56fb6ccaf1c01a97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.969359 kubelet[2848]: E1216 03:15:12.969338 2848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1799bddf62d433b9dbb110f669ee6f8ea0a8b8e9dac806a56fb6ccaf1c01a97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8"
Dec 16 03:15:12.970568 kubelet[2848]: E1216 03:15:12.970468 2848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1799bddf62d433b9dbb110f669ee6f8ea0a8b8e9dac806a56fb6ccaf1c01a97d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8"
Dec 16 03:15:12.970748 kubelet[2848]: E1216 03:15:12.970589 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79777bd46b-pgqh8_calico-system(629371f1-f66b-44ce-8151-2d326255465b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79777bd46b-pgqh8_calico-system(629371f1-f66b-44ce-8151-2d326255465b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1799bddf62d433b9dbb110f669ee6f8ea0a8b8e9dac806a56fb6ccaf1c01a97d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8" podUID="629371f1-f66b-44ce-8151-2d326255465b"
Dec 16 03:15:12.976400 containerd[1606]: time="2025-12-16T03:15:12.976341240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7bb69b54-5cccw,Uid:f6a2c05c-26b5-45cc-94cd-f96ac9ec6971,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f07aced78abbcb7a1a8e4536c695ff46695f8cda4666c8b940a9c6c9927e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.976636 containerd[1606]: time="2025-12-16T03:15:12.976559427Z" level=error msg="Failed to destroy network for sandbox \"99534a9aac2a2cfd1c13cfdd020207701cb3b850da18c8f4e37157dabaceb9f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.976858 kubelet[2848]: E1216 03:15:12.976593 2848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f07aced78abbcb7a1a8e4536c695ff46695f8cda4666c8b940a9c6c9927e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 16 03:15:12.976858 kubelet[2848]: E1216 03:15:12.976648 2848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f07aced78abbcb7a1a8e4536c695ff46695f8cda4666c8b940a9c6c9927e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7bb69b54-5cccw"
Dec 16 03:15:12.976858 kubelet[2848]: E1216 03:15:12.976673 2848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10f07aced78abbcb7a1a8e4536c695ff46695f8cda4666c8b940a9c6c9927e2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7bb69b54-5cccw"
Dec 16 03:15:12.977042 kubelet[2848]: E1216 03:15:12.976733 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d7bb69b54-5cccw_calico-apiserver(f6a2c05c-26b5-45cc-94cd-f96ac9ec6971)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d7bb69b54-5cccw_calico-apiserver(f6a2c05c-26b5-45cc-94cd-f96ac9ec6971)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10f07aced78abbcb7a1a8e4536c695ff46695f8cda4666c8b940a9c6c9927e2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-5cccw" podUID="f6a2c05c-26b5-45cc-94cd-f96ac9ec6971"
Dec 16 03:15:12.979689 containerd[1606]: time="2025-12-16T03:15:12.979542093Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7bb69b54-c7929,Uid:9e2da91d-bd6f-474c-851c-2fd9d9db86f3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99534a9aac2a2cfd1c13cfdd020207701cb3b850da18c8f4e37157dabaceb9f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:12.980572 kubelet[2848]: E1216 03:15:12.979879 2848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99534a9aac2a2cfd1c13cfdd020207701cb3b850da18c8f4e37157dabaceb9f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:12.980572 kubelet[2848]: E1216 03:15:12.979960 2848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99534a9aac2a2cfd1c13cfdd020207701cb3b850da18c8f4e37157dabaceb9f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929" Dec 16 03:15:12.980572 kubelet[2848]: E1216 03:15:12.979991 2848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99534a9aac2a2cfd1c13cfdd020207701cb3b850da18c8f4e37157dabaceb9f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929" Dec 16 03:15:12.980698 kubelet[2848]: E1216 03:15:12.980047 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d7bb69b54-c7929_calico-apiserver(9e2da91d-bd6f-474c-851c-2fd9d9db86f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d7bb69b54-c7929_calico-apiserver(9e2da91d-bd6f-474c-851c-2fd9d9db86f3)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"99534a9aac2a2cfd1c13cfdd020207701cb3b850da18c8f4e37157dabaceb9f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929" podUID="9e2da91d-bd6f-474c-851c-2fd9d9db86f3" Dec 16 03:15:12.984046 containerd[1606]: time="2025-12-16T03:15:12.983958738Z" level=error msg="Failed to destroy network for sandbox \"18ccc0623fff2726c767db48a1925aeed5481f0336e3a657b43c2e47048ec7c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:12.985245 containerd[1606]: time="2025-12-16T03:15:12.985173107Z" level=error msg="Failed to destroy network for sandbox \"f9d69338cd9d037458747f779a31382835832fb6bc0001a31859984bde8fe289\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:12.992216 containerd[1606]: time="2025-12-16T03:15:12.992090236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vwjmk,Uid:5b1e8305-d364-4bc6-9a3a-e97daf2d06ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9d69338cd9d037458747f779a31382835832fb6bc0001a31859984bde8fe289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:12.992466 kubelet[2848]: E1216 03:15:12.992392 2848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f9d69338cd9d037458747f779a31382835832fb6bc0001a31859984bde8fe289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:12.992561 kubelet[2848]: E1216 03:15:12.992470 2848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9d69338cd9d037458747f779a31382835832fb6bc0001a31859984bde8fe289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vwjmk" Dec 16 03:15:12.992561 kubelet[2848]: E1216 03:15:12.992493 2848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9d69338cd9d037458747f779a31382835832fb6bc0001a31859984bde8fe289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vwjmk" Dec 16 03:15:12.992642 kubelet[2848]: E1216 03:15:12.992545 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vwjmk_calico-system(5b1e8305-d364-4bc6-9a3a-e97daf2d06ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vwjmk_calico-system(5b1e8305-d364-4bc6-9a3a-e97daf2d06ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9d69338cd9d037458747f779a31382835832fb6bc0001a31859984bde8fe289\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vwjmk" 
podUID="5b1e8305-d364-4bc6-9a3a-e97daf2d06ed" Dec 16 03:15:12.994091 containerd[1606]: time="2025-12-16T03:15:12.994008590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59c569b967-f9bm2,Uid:f4b3ad00-5810-4c4f-99d3-1bda488b3dc0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ccc0623fff2726c767db48a1925aeed5481f0336e3a657b43c2e47048ec7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:12.994327 kubelet[2848]: E1216 03:15:12.994216 2848 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ccc0623fff2726c767db48a1925aeed5481f0336e3a657b43c2e47048ec7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 03:15:12.994327 kubelet[2848]: E1216 03:15:12.994242 2848 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ccc0623fff2726c767db48a1925aeed5481f0336e3a657b43c2e47048ec7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-59c569b967-f9bm2" Dec 16 03:15:12.994327 kubelet[2848]: E1216 03:15:12.994256 2848 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18ccc0623fff2726c767db48a1925aeed5481f0336e3a657b43c2e47048ec7c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-59c569b967-f9bm2" Dec 16 03:15:12.994511 kubelet[2848]: E1216 03:15:12.994286 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-59c569b967-f9bm2_calico-system(f4b3ad00-5810-4c4f-99d3-1bda488b3dc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-59c569b967-f9bm2_calico-system(f4b3ad00-5810-4c4f-99d3-1bda488b3dc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18ccc0623fff2726c767db48a1925aeed5481f0336e3a657b43c2e47048ec7c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-59c569b967-f9bm2" podUID="f4b3ad00-5810-4c4f-99d3-1bda488b3dc0" Dec 16 03:15:13.284004 systemd[1]: run-netns-cni\x2db5b61d9c\x2dc46f\x2de89d\x2dd580\x2df6a964250d34.mount: Deactivated successfully. Dec 16 03:15:15.052779 kubelet[2848]: I1216 03:15:15.052655 2848 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 03:15:15.053438 kubelet[2848]: E1216 03:15:15.053182 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:15.086000 audit[3897]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3897 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:15.086000 audit[3897]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff1c760410 a2=0 a3=7fff1c7603fc items=0 ppid=2997 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:15.086000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:15.094000 audit[3897]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3897 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:15.094000 audit[3897]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff1c760410 a2=0 a3=7fff1c7603fc items=0 ppid=2997 pid=3897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:15.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:15.729796 kubelet[2848]: E1216 03:15:15.729748 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:20.593023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205048892.mount: Deactivated successfully. Dec 16 03:15:23.269768 containerd[1606]: time="2025-12-16T03:15:23.269487854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:23.271180 containerd[1606]: time="2025-12-16T03:15:23.270977998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Dec 16 03:15:23.276600 containerd[1606]: time="2025-12-16T03:15:23.276374330Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:23.283036 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:36334.service - OpenSSH per-connection server daemon (10.0.0.1:36334). 
Dec 16 03:15:23.288334 kernel: kauditd_printk_skb: 12 callbacks suppressed Dec 16 03:15:23.288436 kernel: audit: type=1130 audit(1765854923.282:563): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.26:22-10.0.0.1:36334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:23.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.26:22-10.0.0.1:36334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:23.302581 containerd[1606]: time="2025-12-16T03:15:23.302197899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 03:15:23.307480 containerd[1606]: time="2025-12-16T03:15:23.307434617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.585552135s" Dec 16 03:15:23.307480 containerd[1606]: time="2025-12-16T03:15:23.307475855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Dec 16 03:15:23.344151 containerd[1606]: time="2025-12-16T03:15:23.344057031Z" level=info msg="CreateContainer within sandbox \"93308a3b75d720ff282db40194be7a4d19749e72dc34ce30a7cf3816092ce712\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 03:15:23.379739 containerd[1606]: time="2025-12-16T03:15:23.373900684Z" level=info msg="Container 
207af5ca4e5717357c9320de29b6918d4a5e1df28329fc29d18d8b1b122d3984: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:23.396561 containerd[1606]: time="2025-12-16T03:15:23.396506494Z" level=info msg="CreateContainer within sandbox \"93308a3b75d720ff282db40194be7a4d19749e72dc34ce30a7cf3816092ce712\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"207af5ca4e5717357c9320de29b6918d4a5e1df28329fc29d18d8b1b122d3984\"" Dec 16 03:15:23.404507 containerd[1606]: time="2025-12-16T03:15:23.402896485Z" level=info msg="StartContainer for \"207af5ca4e5717357c9320de29b6918d4a5e1df28329fc29d18d8b1b122d3984\"" Dec 16 03:15:23.405629 containerd[1606]: time="2025-12-16T03:15:23.405595287Z" level=info msg="connecting to shim 207af5ca4e5717357c9320de29b6918d4a5e1df28329fc29d18d8b1b122d3984" address="unix:///run/containerd/s/8d59605687c08a36e93f3dbfa6998c75539928ee9a83b41e4464f06878fb25c5" protocol=ttrpc version=3 Dec 16 03:15:23.436000 audit[3903]: USER_ACCT pid=3903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:23.438203 sshd[3903]: Accepted publickey for core from 10.0.0.1 port 36334 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:15:23.440165 sshd-session[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:23.438000 audit[3903]: CRED_ACQ pid=3903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:23.446320 systemd-logind[1586]: New session 11 of user core. 
Dec 16 03:15:23.450564 kernel: audit: type=1101 audit(1765854923.436:564): pid=3903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:23.450632 kernel: audit: type=1103 audit(1765854923.438:565): pid=3903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:23.454190 kernel: audit: type=1006 audit(1765854923.438:566): pid=3903 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 16 03:15:23.438000 audit[3903]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2a124c00 a2=3 a3=0 items=0 ppid=1 pid=3903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:23.460889 kernel: audit: type=1300 audit(1765854923.438:566): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe2a124c00 a2=3 a3=0 items=0 ppid=1 pid=3903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:23.438000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:23.463013 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 16 03:15:23.464905 kernel: audit: type=1327 audit(1765854923.438:566): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:23.468000 audit[3903]: USER_START pid=3903 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:23.478744 kernel: audit: type=1105 audit(1765854923.468:567): pid=3903 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:23.482000 audit[3909]: CRED_ACQ pid=3909 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:23.488759 kernel: audit: type=1103 audit(1765854923.482:568): pid=3909 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:23.545992 systemd[1]: Started cri-containerd-207af5ca4e5717357c9320de29b6918d4a5e1df28329fc29d18d8b1b122d3984.scope - libcontainer container 207af5ca4e5717357c9320de29b6918d4a5e1df28329fc29d18d8b1b122d3984. 
Dec 16 03:15:23.610000 audit: BPF prog-id=170 op=LOAD Dec 16 03:15:23.610000 audit[3910]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3356 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:23.619185 kernel: audit: type=1334 audit(1765854923.610:569): prog-id=170 op=LOAD Dec 16 03:15:23.619250 kernel: audit: type=1300 audit(1765854923.610:569): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3356 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:23.610000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230376166356361346535373137333537633933323064653239623639 Dec 16 03:15:23.611000 audit: BPF prog-id=171 op=LOAD Dec 16 03:15:23.611000 audit[3910]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3356 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:23.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230376166356361346535373137333537633933323064653239623639 Dec 16 03:15:23.611000 audit: BPF prog-id=171 op=UNLOAD Dec 16 03:15:23.611000 audit[3910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3356 
pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:23.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230376166356361346535373137333537633933323064653239623639 Dec 16 03:15:23.611000 audit: BPF prog-id=170 op=UNLOAD Dec 16 03:15:23.611000 audit[3910]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3356 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:23.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230376166356361346535373137333537633933323064653239623639 Dec 16 03:15:23.611000 audit: BPF prog-id=172 op=LOAD Dec 16 03:15:23.611000 audit[3910]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3356 pid=3910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:23.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230376166356361346535373137333537633933323064653239623639 Dec 16 03:15:23.704977 containerd[1606]: time="2025-12-16T03:15:23.704911764Z" level=info msg="StartContainer for 
\"207af5ca4e5717357c9320de29b6918d4a5e1df28329fc29d18d8b1b122d3984\" returns successfully" Dec 16 03:15:23.706739 sshd[3909]: Connection closed by 10.0.0.1 port 36334 Dec 16 03:15:23.707183 sshd-session[3903]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:23.708000 audit[3903]: USER_END pid=3903 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:23.708000 audit[3903]: CRED_DISP pid=3903 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:23.712905 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:36334.service: Deactivated successfully. Dec 16 03:15:23.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.26:22-10.0.0.1:36334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:23.715778 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 03:15:23.718770 systemd-logind[1586]: Session 11 logged out. Waiting for processes to exit. Dec 16 03:15:23.719828 systemd-logind[1586]: Removed session 11. Dec 16 03:15:23.756226 kubelet[2848]: E1216 03:15:23.756189 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:23.759267 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 03:15:23.759383 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 03:15:23.782235 kubelet[2848]: I1216 03:15:23.781499 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5cptp" podStartSLOduration=1.345024742 podStartE2EDuration="23.781482748s" podCreationTimestamp="2025-12-16 03:15:00 +0000 UTC" firstStartedPulling="2025-12-16 03:15:00.876163124 +0000 UTC m=+23.614855534" lastFinishedPulling="2025-12-16 03:15:23.31262112 +0000 UTC m=+46.051313540" observedRunningTime="2025-12-16 03:15:23.776505293 +0000 UTC m=+46.515197703" watchObservedRunningTime="2025-12-16 03:15:23.781482748 +0000 UTC m=+46.520175158" Dec 16 03:15:24.072119 kubelet[2848]: I1216 03:15:24.072042 2848 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-whisker-ca-bundle\") pod \"f4b3ad00-5810-4c4f-99d3-1bda488b3dc0\" (UID: \"f4b3ad00-5810-4c4f-99d3-1bda488b3dc0\") " Dec 16 03:15:24.072119 kubelet[2848]: I1216 03:15:24.072116 2848 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-whisker-backend-key-pair\") pod \"f4b3ad00-5810-4c4f-99d3-1bda488b3dc0\" (UID: \"f4b3ad00-5810-4c4f-99d3-1bda488b3dc0\") " Dec 16 03:15:24.072119 kubelet[2848]: I1216 03:15:24.072138 2848 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjlbs\" (UniqueName: \"kubernetes.io/projected/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-kube-api-access-wjlbs\") pod \"f4b3ad00-5810-4c4f-99d3-1bda488b3dc0\" (UID: \"f4b3ad00-5810-4c4f-99d3-1bda488b3dc0\") " Dec 16 03:15:24.073577 kubelet[2848]: I1216 03:15:24.073255 2848 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"f4b3ad00-5810-4c4f-99d3-1bda488b3dc0" (UID: "f4b3ad00-5810-4c4f-99d3-1bda488b3dc0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 03:15:24.082537 kubelet[2848]: I1216 03:15:24.082488 2848 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f4b3ad00-5810-4c4f-99d3-1bda488b3dc0" (UID: "f4b3ad00-5810-4c4f-99d3-1bda488b3dc0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 03:15:24.083271 systemd[1]: var-lib-kubelet-pods-f4b3ad00\x2d5810\x2d4c4f\x2d99d3\x2d1bda488b3dc0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 03:15:24.083444 kubelet[2848]: I1216 03:15:24.083378 2848 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-kube-api-access-wjlbs" (OuterVolumeSpecName: "kube-api-access-wjlbs") pod "f4b3ad00-5810-4c4f-99d3-1bda488b3dc0" (UID: "f4b3ad00-5810-4c4f-99d3-1bda488b3dc0"). InnerVolumeSpecName "kube-api-access-wjlbs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 03:15:24.172753 kubelet[2848]: I1216 03:15:24.172632 2848 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 16 03:15:24.172753 kubelet[2848]: I1216 03:15:24.172682 2848 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 16 03:15:24.172753 kubelet[2848]: I1216 03:15:24.172692 2848 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wjlbs\" (UniqueName: \"kubernetes.io/projected/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0-kube-api-access-wjlbs\") on node \"localhost\" DevicePath \"\"" Dec 16 03:15:24.320536 systemd[1]: var-lib-kubelet-pods-f4b3ad00\x2d5810\x2d4c4f\x2d99d3\x2d1bda488b3dc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwjlbs.mount: Deactivated successfully. Dec 16 03:15:24.758575 kubelet[2848]: E1216 03:15:24.758231 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:24.764141 systemd[1]: Removed slice kubepods-besteffort-podf4b3ad00_5810_4c4f_99d3_1bda488b3dc0.slice - libcontainer container kubepods-besteffort-podf4b3ad00_5810_4c4f_99d3_1bda488b3dc0.slice. Dec 16 03:15:24.837449 systemd[1]: Created slice kubepods-besteffort-poda5002d96_1b90_443a_9926_1ad68bf4babc.slice - libcontainer container kubepods-besteffort-poda5002d96_1b90_443a_9926_1ad68bf4babc.slice. 
Dec 16 03:15:24.877796 kubelet[2848]: I1216 03:15:24.877732 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv6n9\" (UniqueName: \"kubernetes.io/projected/a5002d96-1b90-443a-9926-1ad68bf4babc-kube-api-access-wv6n9\") pod \"whisker-66c75bc767-db58s\" (UID: \"a5002d96-1b90-443a-9926-1ad68bf4babc\") " pod="calico-system/whisker-66c75bc767-db58s" Dec 16 03:15:24.877796 kubelet[2848]: I1216 03:15:24.877791 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a5002d96-1b90-443a-9926-1ad68bf4babc-whisker-backend-key-pair\") pod \"whisker-66c75bc767-db58s\" (UID: \"a5002d96-1b90-443a-9926-1ad68bf4babc\") " pod="calico-system/whisker-66c75bc767-db58s" Dec 16 03:15:24.878017 kubelet[2848]: I1216 03:15:24.877823 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5002d96-1b90-443a-9926-1ad68bf4babc-whisker-ca-bundle\") pod \"whisker-66c75bc767-db58s\" (UID: \"a5002d96-1b90-443a-9926-1ad68bf4babc\") " pod="calico-system/whisker-66c75bc767-db58s" Dec 16 03:15:25.145034 containerd[1606]: time="2025-12-16T03:15:25.144974201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66c75bc767-db58s,Uid:a5002d96-1b90-443a-9926-1ad68bf4babc,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:25.351000 audit: BPF prog-id=173 op=LOAD Dec 16 03:15:25.351000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8e826660 a2=98 a3=1fffffffffffffff items=0 ppid=4062 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.351000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:25.351000 audit: BPF prog-id=173 op=UNLOAD Dec 16 03:15:25.351000 audit[4168]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe8e826630 a3=0 items=0 ppid=4062 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.351000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:25.351000 audit: BPF prog-id=174 op=LOAD Dec 16 03:15:25.351000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8e826540 a2=94 a3=3 items=0 ppid=4062 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.351000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:25.351000 audit: BPF prog-id=174 op=UNLOAD Dec 16 03:15:25.351000 audit[4168]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe8e826540 a2=94 a3=3 items=0 ppid=4062 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.351000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:25.351000 audit: BPF prog-id=175 op=LOAD Dec 16 03:15:25.351000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8e826580 a2=94 a3=7ffe8e826760 items=0 ppid=4062 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.351000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:25.351000 audit: BPF prog-id=175 op=UNLOAD Dec 16 03:15:25.351000 audit[4168]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe8e826580 a2=94 a3=7ffe8e826760 items=0 ppid=4062 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.351000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 16 03:15:25.353000 audit: BPF prog-id=176 op=LOAD Dec 16 03:15:25.353000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd3c870fe0 a2=98 a3=3 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.353000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.353000 audit: BPF prog-id=176 op=UNLOAD Dec 16 03:15:25.353000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd3c870fb0 a3=0 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.353000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.353000 audit: BPF prog-id=177 op=LOAD Dec 16 03:15:25.353000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3c870dd0 a2=94 a3=54428f items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.353000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.353000 audit: BPF prog-id=177 op=UNLOAD Dec 16 03:15:25.353000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd3c870dd0 a2=94 a3=54428f items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.353000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.353000 audit: BPF prog-id=178 op=LOAD Dec 16 03:15:25.353000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3c870e00 a2=94 a3=2 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.353000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.353000 audit: BPF prog-id=178 op=UNLOAD Dec 16 03:15:25.353000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd3c870e00 a2=0 a3=2 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.353000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.502798 containerd[1606]: time="2025-12-16T03:15:25.502592375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79777bd46b-pgqh8,Uid:629371f1-f66b-44ce-8151-2d326255465b,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:25.506910 kubelet[2848]: I1216 03:15:25.506862 2848 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b3ad00-5810-4c4f-99d3-1bda488b3dc0" path="/var/lib/kubelet/pods/f4b3ad00-5810-4c4f-99d3-1bda488b3dc0/volumes" Dec 16 03:15:25.576000 audit: BPF prog-id=179 op=LOAD Dec 16 03:15:25.576000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd3c870cc0 a2=94 a3=1 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.576000 audit: BPF prog-id=179 op=UNLOAD Dec 16 03:15:25.576000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd3c870cc0 a2=94 a3=1 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.576000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.585000 audit: BPF prog-id=180 op=LOAD Dec 16 03:15:25.585000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd3c870cb0 a2=94 a3=4 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.585000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.585000 audit: BPF prog-id=180 op=UNLOAD Dec 16 03:15:25.585000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd3c870cb0 a2=0 a3=4 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.585000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.586000 audit: BPF prog-id=181 op=LOAD Dec 16 03:15:25.586000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd3c870b10 a2=94 a3=5 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.586000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.586000 audit: BPF prog-id=181 op=UNLOAD Dec 16 03:15:25.586000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd3c870b10 a2=0 a3=5 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.586000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.586000 audit: BPF prog-id=182 op=LOAD Dec 16 03:15:25.586000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd3c870d30 a2=94 a3=6 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.586000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.586000 audit: BPF prog-id=182 op=UNLOAD Dec 16 03:15:25.586000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd3c870d30 a2=0 a3=6 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.586000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.586000 audit: BPF prog-id=183 op=LOAD Dec 16 03:15:25.586000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd3c8704e0 a2=94 a3=88 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.586000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.586000 audit: BPF prog-id=184 op=LOAD Dec 16 03:15:25.586000 audit[4169]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd3c870360 a2=94 a3=2 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.586000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 
03:15:25.587000 audit: BPF prog-id=184 op=UNLOAD Dec 16 03:15:25.587000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd3c870390 a2=0 a3=7ffd3c870490 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.587000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.587000 audit: BPF prog-id=183 op=UNLOAD Dec 16 03:15:25.587000 audit[4169]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2a90fd10 a2=0 a3=3d315a680f1a9ac5 items=0 ppid=4062 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.587000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 16 03:15:25.595000 audit: BPF prog-id=185 op=LOAD Dec 16 03:15:25.595000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7c2a6840 a2=98 a3=1999999999999999 items=0 ppid=4062 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.595000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:25.595000 audit: BPF prog-id=185 op=UNLOAD Dec 16 03:15:25.595000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe7c2a6810 a3=0 items=0 ppid=4062 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.595000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:25.595000 audit: BPF prog-id=186 op=LOAD Dec 16 03:15:25.595000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7c2a6720 a2=94 a3=ffff items=0 ppid=4062 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.595000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:25.595000 audit: BPF prog-id=186 op=UNLOAD Dec 16 03:15:25.595000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe7c2a6720 a2=94 a3=ffff items=0 ppid=4062 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.595000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:25.595000 audit: BPF prog-id=187 op=LOAD Dec 16 03:15:25.595000 audit[4172]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe7c2a6760 a2=94 a3=7ffe7c2a6940 items=0 ppid=4062 pid=4172 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.595000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:25.595000 audit: BPF prog-id=187 op=UNLOAD Dec 16 03:15:25.595000 audit[4172]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe7c2a6760 a2=94 a3=7ffe7c2a6940 items=0 ppid=4062 pid=4172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.595000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 16 03:15:25.726447 systemd-networkd[1500]: vxlan.calico: Link UP Dec 16 03:15:25.726462 systemd-networkd[1500]: vxlan.calico: Gained carrier Dec 16 03:15:25.738000 audit: BPF prog-id=188 op=LOAD Dec 16 03:15:25.738000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8fd49e40 a2=98 a3=0 items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.738000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.738000 audit: BPF 
prog-id=188 op=UNLOAD Dec 16 03:15:25.738000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc8fd49e10 a3=0 items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.738000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.738000 audit: BPF prog-id=189 op=LOAD Dec 16 03:15:25.738000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8fd49c50 a2=94 a3=54428f items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.738000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.738000 audit: BPF prog-id=189 op=UNLOAD Dec 16 03:15:25.738000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc8fd49c50 a2=94 a3=54428f items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.738000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.738000 audit: BPF prog-id=190 op=LOAD Dec 16 03:15:25.738000 audit[4198]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8fd49c80 a2=94 a3=2 items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.738000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.738000 audit: BPF prog-id=190 op=UNLOAD Dec 16 03:15:25.738000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc8fd49c80 a2=0 a3=2 items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.738000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.738000 audit: BPF prog-id=191 op=LOAD Dec 16 03:15:25.738000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc8fd49a30 a2=94 a3=4 items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.738000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.739000 audit: BPF prog-id=191 op=UNLOAD Dec 16 03:15:25.739000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 
a1=7ffc8fd49a30 a2=94 a3=4 items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.739000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.739000 audit: BPF prog-id=192 op=LOAD Dec 16 03:15:25.739000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc8fd49b30 a2=94 a3=7ffc8fd49cb0 items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.739000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.739000 audit: BPF prog-id=192 op=UNLOAD Dec 16 03:15:25.739000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc8fd49b30 a2=0 a3=7ffc8fd49cb0 items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.739000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.741000 audit: BPF prog-id=193 op=LOAD Dec 16 03:15:25.741000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc8fd49260 a2=94 a3=2 items=0 ppid=4062 
pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.741000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.741000 audit: BPF prog-id=193 op=UNLOAD Dec 16 03:15:25.741000 audit[4198]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc8fd49260 a2=0 a3=2 items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.741000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.741000 audit: BPF prog-id=194 op=LOAD Dec 16 03:15:25.741000 audit[4198]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc8fd49360 a2=94 a3=30 items=0 ppid=4062 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.741000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 16 03:15:25.753000 audit: BPF prog-id=195 op=LOAD Dec 16 03:15:25.753000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff0f2f2970 a2=98 a3=0 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.753000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:25.753000 audit: BPF prog-id=195 op=UNLOAD Dec 16 03:15:25.753000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff0f2f2940 a3=0 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.753000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:25.753000 audit: BPF prog-id=196 op=LOAD Dec 16 03:15:25.753000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0f2f2760 a2=94 a3=54428f items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.753000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:25.753000 audit: BPF prog-id=196 op=UNLOAD Dec 16 03:15:25.753000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0f2f2760 a2=94 a3=54428f items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.753000 
audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:25.753000 audit: BPF prog-id=197 op=LOAD Dec 16 03:15:25.753000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0f2f2790 a2=94 a3=2 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.753000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:25.753000 audit: BPF prog-id=197 op=UNLOAD Dec 16 03:15:25.753000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0f2f2790 a2=0 a3=2 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:25.753000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:25.760567 kubelet[2848]: E1216 03:15:25.760510 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:26.005000 audit: BPF prog-id=198 op=LOAD Dec 16 03:15:26.005000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff0f2f2650 a2=94 a3=1 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.005000 audit: BPF prog-id=198 op=UNLOAD Dec 16 03:15:26.005000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7fff0f2f2650 a2=94 a3=1 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.014000 audit: BPF prog-id=199 op=LOAD Dec 16 03:15:26.014000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0f2f2640 a2=94 a3=4 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.014000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.014000 audit: BPF prog-id=199 op=UNLOAD Dec 16 03:15:26.014000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff0f2f2640 a2=0 a3=4 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.014000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.015000 audit: BPF prog-id=200 op=LOAD Dec 16 03:15:26.015000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff0f2f24a0 a2=94 a3=5 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.015000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.015000 audit: BPF prog-id=200 op=UNLOAD Dec 16 03:15:26.015000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7fff0f2f24a0 a2=0 a3=5 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.015000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.015000 audit: BPF prog-id=201 op=LOAD Dec 16 03:15:26.015000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0f2f26c0 a2=94 a3=6 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.015000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.015000 audit: BPF prog-id=201 op=UNLOAD Dec 16 03:15:26.015000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7fff0f2f26c0 a2=0 a3=6 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.015000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.015000 audit: BPF prog-id=202 op=LOAD Dec 16 03:15:26.015000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7fff0f2f1e70 a2=94 a3=88 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.015000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.016000 audit: BPF prog-id=203 op=LOAD Dec 16 03:15:26.016000 audit[4201]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7fff0f2f1cf0 a2=94 a3=2 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.016000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.016000 audit: BPF prog-id=203 op=UNLOAD Dec 16 03:15:26.016000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7fff0f2f1d20 a2=0 a3=7fff0f2f1e20 items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.016000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.016000 audit: BPF prog-id=202 op=UNLOAD Dec 16 03:15:26.016000 audit[4201]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=ef9fd10 a2=0 a3=48d153d6bf509def items=0 ppid=4062 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.016000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 16 03:15:26.028000 audit: BPF prog-id=194 op=UNLOAD Dec 16 03:15:26.028000 audit[4062]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000a02140 a2=0 a3=0 items=0 ppid=4045 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.028000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 16 03:15:26.182000 audit[4274]: NETFILTER_CFG table=nat:121 family=2 
entries=15 op=nft_register_chain pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:26.182000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffcff5b4000 a2=0 a3=7ffcff5b3fec items=0 ppid=4062 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.182000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:26.184000 audit[4276]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4276 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:26.184000 audit[4276]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe670adfb0 a2=0 a3=7ffe670adf9c items=0 ppid=4062 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.184000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:26.197000 audit[4272]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4272 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:26.197000 audit[4272]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd085f4b60 a2=0 a3=7ffd085f4b4c items=0 ppid=4062 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.197000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:26.197000 audit[4294]: NETFILTER_CFG table=filter:124 family=2 entries=39 op=nft_register_chain pid=4294 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:26.197000 audit[4294]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffd1203a120 a2=0 a3=5561ed17c000 items=0 ppid=4062 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.197000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:26.327234 systemd-networkd[1500]: calib4d5c351b47: Link UP Dec 16 03:15:26.327987 systemd-networkd[1500]: calib4d5c351b47: Gained carrier Dec 16 03:15:26.353316 containerd[1606]: 2025-12-16 03:15:26.148 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--66c75bc767--db58s-eth0 whisker-66c75bc767- calico-system a5002d96-1b90-443a-9926-1ad68bf4babc 1017 0 2025-12-16 03:15:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66c75bc767 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-66c75bc767-db58s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib4d5c351b47 [] [] }} ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Namespace="calico-system" Pod="whisker-66c75bc767-db58s" WorkloadEndpoint="localhost-k8s-whisker--66c75bc767--db58s-" Dec 16 03:15:26.353316 containerd[1606]: 2025-12-16 03:15:26.148 [INFO][4214] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Namespace="calico-system" Pod="whisker-66c75bc767-db58s" WorkloadEndpoint="localhost-k8s-whisker--66c75bc767--db58s-eth0" Dec 16 03:15:26.353316 containerd[1606]: 2025-12-16 03:15:26.267 [INFO][4266] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" HandleID="k8s-pod-network.7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Workload="localhost-k8s-whisker--66c75bc767--db58s-eth0" Dec 16 03:15:26.353973 containerd[1606]: 2025-12-16 03:15:26.268 [INFO][4266] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" HandleID="k8s-pod-network.7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Workload="localhost-k8s-whisker--66c75bc767--db58s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041cfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-66c75bc767-db58s", "timestamp":"2025-12-16 03:15:26.267769957 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:26.353973 containerd[1606]: 2025-12-16 03:15:26.268 [INFO][4266] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:26.353973 containerd[1606]: 2025-12-16 03:15:26.268 [INFO][4266] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:15:26.353973 containerd[1606]: 2025-12-16 03:15:26.269 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:15:26.353973 containerd[1606]: 2025-12-16 03:15:26.278 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" host="localhost" Dec 16 03:15:26.353973 containerd[1606]: 2025-12-16 03:15:26.288 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:15:26.353973 containerd[1606]: 2025-12-16 03:15:26.294 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:15:26.353973 containerd[1606]: 2025-12-16 03:15:26.296 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:26.353973 containerd[1606]: 2025-12-16 03:15:26.298 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:26.353973 containerd[1606]: 2025-12-16 03:15:26.298 [INFO][4266] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" host="localhost" Dec 16 03:15:26.354279 containerd[1606]: 2025-12-16 03:15:26.300 [INFO][4266] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6 Dec 16 03:15:26.354279 containerd[1606]: 2025-12-16 03:15:26.309 [INFO][4266] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" host="localhost" Dec 16 03:15:26.354279 containerd[1606]: 2025-12-16 03:15:26.319 [INFO][4266] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" host="localhost" Dec 16 03:15:26.354279 containerd[1606]: 2025-12-16 03:15:26.319 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" host="localhost" Dec 16 03:15:26.354279 containerd[1606]: 2025-12-16 03:15:26.319 [INFO][4266] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:15:26.354279 containerd[1606]: 2025-12-16 03:15:26.319 [INFO][4266] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" HandleID="k8s-pod-network.7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Workload="localhost-k8s-whisker--66c75bc767--db58s-eth0" Dec 16 03:15:26.354433 containerd[1606]: 2025-12-16 03:15:26.322 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Namespace="calico-system" Pod="whisker-66c75bc767-db58s" WorkloadEndpoint="localhost-k8s-whisker--66c75bc767--db58s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66c75bc767--db58s-eth0", GenerateName:"whisker-66c75bc767-", Namespace:"calico-system", SelfLink:"", UID:"a5002d96-1b90-443a-9926-1ad68bf4babc", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66c75bc767", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-66c75bc767-db58s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib4d5c351b47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:26.354433 containerd[1606]: 2025-12-16 03:15:26.323 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Namespace="calico-system" Pod="whisker-66c75bc767-db58s" WorkloadEndpoint="localhost-k8s-whisker--66c75bc767--db58s-eth0" Dec 16 03:15:26.354528 containerd[1606]: 2025-12-16 03:15:26.323 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4d5c351b47 ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Namespace="calico-system" Pod="whisker-66c75bc767-db58s" WorkloadEndpoint="localhost-k8s-whisker--66c75bc767--db58s-eth0" Dec 16 03:15:26.354528 containerd[1606]: 2025-12-16 03:15:26.334 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Namespace="calico-system" Pod="whisker-66c75bc767-db58s" WorkloadEndpoint="localhost-k8s-whisker--66c75bc767--db58s-eth0" Dec 16 03:15:26.354580 containerd[1606]: 2025-12-16 03:15:26.334 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Namespace="calico-system" Pod="whisker-66c75bc767-db58s" 
WorkloadEndpoint="localhost-k8s-whisker--66c75bc767--db58s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66c75bc767--db58s-eth0", GenerateName:"whisker-66c75bc767-", Namespace:"calico-system", SelfLink:"", UID:"a5002d96-1b90-443a-9926-1ad68bf4babc", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66c75bc767", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6", Pod:"whisker-66c75bc767-db58s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib4d5c351b47", MAC:"56:8b:ec:92:1c:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:26.354656 containerd[1606]: 2025-12-16 03:15:26.349 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" Namespace="calico-system" Pod="whisker-66c75bc767-db58s" WorkloadEndpoint="localhost-k8s-whisker--66c75bc767--db58s-eth0" Dec 16 03:15:26.380000 audit[4322]: NETFILTER_CFG table=filter:125 family=2 entries=59 op=nft_register_chain pid=4322 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:26.380000 audit[4322]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7ffc5231e4e0 a2=0 a3=7ffc5231e4cc items=0 ppid=4062 pid=4322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.380000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:26.442378 systemd-networkd[1500]: calide655afb685: Link UP Dec 16 03:15:26.443817 systemd-networkd[1500]: calide655afb685: Gained carrier Dec 16 03:15:26.464604 containerd[1606]: 2025-12-16 03:15:26.228 [INFO][4278] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0 calico-kube-controllers-79777bd46b- calico-system 629371f1-f66b-44ce-8151-2d326255465b 896 0 2025-12-16 03:15:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79777bd46b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-79777bd46b-pgqh8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calide655afb685 [] [] }} ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Namespace="calico-system" Pod="calico-kube-controllers-79777bd46b-pgqh8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-" Dec 16 03:15:26.464604 containerd[1606]: 2025-12-16 03:15:26.228 [INFO][4278] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Namespace="calico-system" Pod="calico-kube-controllers-79777bd46b-pgqh8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0" Dec 16 03:15:26.464604 containerd[1606]: 2025-12-16 03:15:26.268 [INFO][4301] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" HandleID="k8s-pod-network.a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Workload="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0" Dec 16 03:15:26.464961 containerd[1606]: 2025-12-16 03:15:26.268 [INFO][4301] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" HandleID="k8s-pod-network.a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Workload="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d590), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-79777bd46b-pgqh8", "timestamp":"2025-12-16 03:15:26.268066811 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:26.464961 containerd[1606]: 2025-12-16 03:15:26.268 [INFO][4301] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:26.464961 containerd[1606]: 2025-12-16 03:15:26.319 [INFO][4301] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:15:26.464961 containerd[1606]: 2025-12-16 03:15:26.319 [INFO][4301] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:15:26.464961 containerd[1606]: 2025-12-16 03:15:26.390 [INFO][4301] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" host="localhost" Dec 16 03:15:26.464961 containerd[1606]: 2025-12-16 03:15:26.399 [INFO][4301] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:15:26.464961 containerd[1606]: 2025-12-16 03:15:26.404 [INFO][4301] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:15:26.464961 containerd[1606]: 2025-12-16 03:15:26.406 [INFO][4301] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:26.464961 containerd[1606]: 2025-12-16 03:15:26.408 [INFO][4301] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:26.464961 containerd[1606]: 2025-12-16 03:15:26.409 [INFO][4301] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" host="localhost" Dec 16 03:15:26.467860 containerd[1606]: 2025-12-16 03:15:26.410 [INFO][4301] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0 Dec 16 03:15:26.467860 containerd[1606]: 2025-12-16 03:15:26.421 [INFO][4301] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" host="localhost" Dec 16 03:15:26.467860 containerd[1606]: 2025-12-16 03:15:26.431 [INFO][4301] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" host="localhost" Dec 16 03:15:26.467860 containerd[1606]: 2025-12-16 03:15:26.431 [INFO][4301] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" host="localhost" Dec 16 03:15:26.467860 containerd[1606]: 2025-12-16 03:15:26.431 [INFO][4301] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:15:26.467860 containerd[1606]: 2025-12-16 03:15:26.431 [INFO][4301] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" HandleID="k8s-pod-network.a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Workload="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0" Dec 16 03:15:26.467996 containerd[1606]: 2025-12-16 03:15:26.435 [INFO][4278] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Namespace="calico-system" Pod="calico-kube-controllers-79777bd46b-pgqh8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0", GenerateName:"calico-kube-controllers-79777bd46b-", Namespace:"calico-system", SelfLink:"", UID:"629371f1-f66b-44ce-8151-2d326255465b", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79777bd46b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-79777bd46b-pgqh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide655afb685", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:26.468067 containerd[1606]: 2025-12-16 03:15:26.436 [INFO][4278] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Namespace="calico-system" Pod="calico-kube-controllers-79777bd46b-pgqh8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0" Dec 16 03:15:26.468067 containerd[1606]: 2025-12-16 03:15:26.436 [INFO][4278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide655afb685 ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Namespace="calico-system" Pod="calico-kube-controllers-79777bd46b-pgqh8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0" Dec 16 03:15:26.468067 containerd[1606]: 2025-12-16 03:15:26.445 [INFO][4278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Namespace="calico-system" Pod="calico-kube-controllers-79777bd46b-pgqh8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0" Dec 16 03:15:26.468137 containerd[1606]: 
2025-12-16 03:15:26.445 [INFO][4278] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Namespace="calico-system" Pod="calico-kube-controllers-79777bd46b-pgqh8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0", GenerateName:"calico-kube-controllers-79777bd46b-", Namespace:"calico-system", SelfLink:"", UID:"629371f1-f66b-44ce-8151-2d326255465b", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79777bd46b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0", Pod:"calico-kube-controllers-79777bd46b-pgqh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide655afb685", MAC:"0a:fe:1d:f7:bd:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:26.468205 containerd[1606]: 
2025-12-16 03:15:26.457 [INFO][4278] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" Namespace="calico-system" Pod="calico-kube-controllers-79777bd46b-pgqh8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--79777bd46b--pgqh8-eth0" Dec 16 03:15:26.498118 containerd[1606]: time="2025-12-16T03:15:26.498027542Z" level=info msg="connecting to shim 7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6" address="unix:///run/containerd/s/8a1f998b3cbdce8004d818ec982e0cdce34438ba79a88fad690a8aa2d543c087" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:26.502495 kubelet[2848]: E1216 03:15:26.502432 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:26.503026 kubelet[2848]: E1216 03:15:26.503006 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:26.505120 containerd[1606]: time="2025-12-16T03:15:26.505017182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7bb69b54-c7929,Uid:9e2da91d-bd6f-474c-851c-2fd9d9db86f3,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:15:26.506389 containerd[1606]: time="2025-12-16T03:15:26.506348341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l2dsg,Uid:7167e9f6-6a0b-43eb-9986-460671ce7a4d,Namespace:kube-system,Attempt:0,}" Dec 16 03:15:26.506502 containerd[1606]: time="2025-12-16T03:15:26.506446728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dvxtw,Uid:534f1053-c823-4a53-90bf-e8b0c562d2fe,Namespace:kube-system,Attempt:0,}" Dec 16 03:15:26.506542 containerd[1606]: time="2025-12-16T03:15:26.506520919Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-vwjmk,Uid:5b1e8305-d364-4bc6-9a3a-e97daf2d06ed,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:26.506624 containerd[1606]: time="2025-12-16T03:15:26.506580753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7bb69b54-5cccw,Uid:f6a2c05c-26b5-45cc-94cd-f96ac9ec6971,Namespace:calico-apiserver,Attempt:0,}" Dec 16 03:15:26.506761 containerd[1606]: time="2025-12-16T03:15:26.506649402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4rmp,Uid:05279c13-1f07-47f3-aaa0-f3eff20006ee,Namespace:calico-system,Attempt:0,}" Dec 16 03:15:26.513660 containerd[1606]: time="2025-12-16T03:15:26.512707513Z" level=info msg="connecting to shim a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0" address="unix:///run/containerd/s/ae07d91dfdad6c5c3a6e5093862966796ae3c4c56acea886f0cbd0c0ff0a8a5b" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:26.563361 systemd[1]: Started cri-containerd-7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6.scope - libcontainer container 7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6. Dec 16 03:15:26.585304 systemd[1]: Started cri-containerd-a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0.scope - libcontainer container a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0. 
Dec 16 03:15:26.598000 audit[4421]: NETFILTER_CFG table=filter:126 family=2 entries=36 op=nft_register_chain pid=4421 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:26.598000 audit[4421]: SYSCALL arch=c000003e syscall=46 success=yes exit=19576 a0=3 a1=7ffd29eb6870 a2=0 a3=7ffd29eb685c items=0 ppid=4062 pid=4421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.598000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:26.636000 audit: BPF prog-id=204 op=LOAD Dec 16 03:15:26.636000 audit: BPF prog-id=205 op=LOAD Dec 16 03:15:26.636000 audit[4369]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4343 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766326639323035313233643232653538323165333461626638386462 Dec 16 03:15:26.637000 audit: BPF prog-id=205 op=UNLOAD Dec 16 03:15:26.637000 audit[4369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4343 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.637000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766326639323035313233643232653538323165333461626638386462 Dec 16 03:15:26.637000 audit: BPF prog-id=206 op=LOAD Dec 16 03:15:26.637000 audit[4369]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4343 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766326639323035313233643232653538323165333461626638386462 Dec 16 03:15:26.638000 audit: BPF prog-id=207 op=LOAD Dec 16 03:15:26.638000 audit[4369]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4343 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766326639323035313233643232653538323165333461626638386462 Dec 16 03:15:26.638000 audit: BPF prog-id=207 op=UNLOAD Dec 16 03:15:26.638000 audit[4369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4343 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:15:26.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766326639323035313233643232653538323165333461626638386462 Dec 16 03:15:26.638000 audit: BPF prog-id=206 op=UNLOAD Dec 16 03:15:26.638000 audit[4369]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4343 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766326639323035313233643232653538323165333461626638386462 Dec 16 03:15:26.638000 audit: BPF prog-id=208 op=LOAD Dec 16 03:15:26.639000 audit: BPF prog-id=209 op=LOAD Dec 16 03:15:26.638000 audit[4369]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4343 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766326639323035313233643232653538323165333461626638386462 Dec 16 03:15:26.639000 audit: BPF prog-id=210 op=LOAD Dec 16 03:15:26.639000 audit[4384]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4351 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138353463353532363133336233313633643532656465356137396264 Dec 16 03:15:26.639000 audit: BPF prog-id=210 op=UNLOAD Dec 16 03:15:26.639000 audit[4384]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4351 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138353463353532363133336233313633643532656465356137396264 Dec 16 03:15:26.639000 audit: BPF prog-id=211 op=LOAD Dec 16 03:15:26.639000 audit[4384]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4351 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138353463353532363133336233313633643532656465356137396264 Dec 16 03:15:26.639000 audit: BPF prog-id=212 op=LOAD Dec 16 03:15:26.639000 audit[4384]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4351 pid=4384 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138353463353532363133336233313633643532656465356137396264 Dec 16 03:15:26.640000 audit: BPF prog-id=212 op=UNLOAD Dec 16 03:15:26.640000 audit[4384]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4351 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138353463353532363133336233313633643532656465356137396264 Dec 16 03:15:26.640000 audit: BPF prog-id=211 op=UNLOAD Dec 16 03:15:26.640000 audit[4384]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4351 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138353463353532363133336233313633643532656465356137396264 Dec 16 03:15:26.640000 audit: BPF prog-id=213 op=LOAD Dec 16 03:15:26.640000 audit[4384]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 
ppid=4351 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138353463353532363133336233313633643532656465356137396264 Dec 16 03:15:26.641007 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:15:26.642345 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:15:26.742603 containerd[1606]: time="2025-12-16T03:15:26.742322999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79777bd46b-pgqh8,Uid:629371f1-f66b-44ce-8151-2d326255465b,Namespace:calico-system,Attempt:0,} returns sandbox id \"a854c5526133b3163d52ede5a79bd86759633033988d8e12180c41e25ddb6bc0\"" Dec 16 03:15:26.752491 containerd[1606]: time="2025-12-16T03:15:26.752450460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:15:26.786681 containerd[1606]: time="2025-12-16T03:15:26.786577589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66c75bc767-db58s,Uid:a5002d96-1b90-443a-9926-1ad68bf4babc,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f2f9205123d22e5821e34abf88dba7fc946c92496f44b5de0b693e7e0adfdd6\"" Dec 16 03:15:26.857556 systemd-networkd[1500]: cali18a16bf3740: Link UP Dec 16 03:15:26.868864 systemd-networkd[1500]: cali18a16bf3740: Gained carrier Dec 16 03:15:26.909169 containerd[1606]: 2025-12-16 03:15:26.655 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0 coredns-668d6bf9bc- kube-system 7167e9f6-6a0b-43eb-9986-460671ce7a4d 884 0 2025-12-16 03:14:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-l2dsg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18a16bf3740 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2dsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2dsg-" Dec 16 03:15:26.909169 containerd[1606]: 2025-12-16 03:15:26.656 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2dsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0" Dec 16 03:15:26.909169 containerd[1606]: 2025-12-16 03:15:26.762 [INFO][4442] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" HandleID="k8s-pod-network.8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Workload="localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0" Dec 16 03:15:26.909914 containerd[1606]: 2025-12-16 03:15:26.762 [INFO][4442] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" HandleID="k8s-pod-network.8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Workload="localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325f60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-l2dsg", "timestamp":"2025-12-16 
03:15:26.762055289 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:26.909914 containerd[1606]: 2025-12-16 03:15:26.762 [INFO][4442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:26.909914 containerd[1606]: 2025-12-16 03:15:26.762 [INFO][4442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:15:26.909914 containerd[1606]: 2025-12-16 03:15:26.762 [INFO][4442] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:15:26.909914 containerd[1606]: 2025-12-16 03:15:26.779 [INFO][4442] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" host="localhost" Dec 16 03:15:26.909914 containerd[1606]: 2025-12-16 03:15:26.790 [INFO][4442] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:15:26.909914 containerd[1606]: 2025-12-16 03:15:26.800 [INFO][4442] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:15:26.909914 containerd[1606]: 2025-12-16 03:15:26.804 [INFO][4442] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:26.909914 containerd[1606]: 2025-12-16 03:15:26.809 [INFO][4442] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:26.909914 containerd[1606]: 2025-12-16 03:15:26.809 [INFO][4442] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" host="localhost" Dec 16 03:15:26.912971 containerd[1606]: 2025-12-16 03:15:26.811 [INFO][4442] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6 Dec 16 03:15:26.912971 containerd[1606]: 2025-12-16 03:15:26.818 [INFO][4442] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" host="localhost" Dec 16 03:15:26.912971 containerd[1606]: 2025-12-16 03:15:26.830 [INFO][4442] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" host="localhost" Dec 16 03:15:26.912971 containerd[1606]: 2025-12-16 03:15:26.831 [INFO][4442] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" host="localhost" Dec 16 03:15:26.912971 containerd[1606]: 2025-12-16 03:15:26.831 [INFO][4442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:15:26.912971 containerd[1606]: 2025-12-16 03:15:26.831 [INFO][4442] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" HandleID="k8s-pod-network.8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Workload="localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0" Dec 16 03:15:26.913216 containerd[1606]: 2025-12-16 03:15:26.842 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2dsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7167e9f6-6a0b-43eb-9986-460671ce7a4d", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-l2dsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18a16bf3740", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:26.913332 containerd[1606]: 2025-12-16 03:15:26.842 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2dsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0" Dec 16 03:15:26.913332 containerd[1606]: 2025-12-16 03:15:26.842 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18a16bf3740 ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2dsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0" Dec 16 03:15:26.913332 containerd[1606]: 2025-12-16 03:15:26.874 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2dsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0" Dec 16 03:15:26.913426 containerd[1606]: 2025-12-16 03:15:26.878 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2dsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7167e9f6-6a0b-43eb-9986-460671ce7a4d", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6", Pod:"coredns-668d6bf9bc-l2dsg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18a16bf3740", MAC:"62:bb:10:c8:3b:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:26.913426 containerd[1606]: 2025-12-16 03:15:26.898 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2dsg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2dsg-eth0" Dec 16 03:15:26.923938 systemd-networkd[1500]: vxlan.calico: Gained IPv6LL Dec 16 03:15:26.926000 audit[4574]: NETFILTER_CFG table=filter:127 family=2 entries=46 op=nft_register_chain pid=4574 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:26.926000 audit[4574]: SYSCALL arch=c000003e syscall=46 success=yes exit=23740 a0=3 a1=7fff9c961760 a2=0 a3=7fff9c96174c items=0 ppid=4062 pid=4574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:26.926000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:26.954837 systemd-networkd[1500]: cali4c8a9d146c5: Link UP Dec 16 03:15:26.958489 systemd-networkd[1500]: cali4c8a9d146c5: Gained carrier Dec 16 03:15:26.976774 containerd[1606]: time="2025-12-16T03:15:26.976582808Z" level=info msg="connecting to shim 8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6" address="unix:///run/containerd/s/d34b5084a4946d6503c488798cd7276748bf16bf15d067a3d6c38d6a91f07a05" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.763 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0 calico-apiserver-6d7bb69b54- calico-apiserver 9e2da91d-bd6f-474c-851c-2fd9d9db86f3 898 0 2025-12-16 03:14:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d7bb69b54 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d7bb69b54-c7929 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c8a9d146c5 [] [] }} ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-c7929" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.763 [INFO][4430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-c7929" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.806 [INFO][4525] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" HandleID="k8s-pod-network.8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Workload="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.806 [INFO][4525] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" HandleID="k8s-pod-network.8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Workload="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d7bb69b54-c7929", "timestamp":"2025-12-16 03:15:26.806187777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.806 [INFO][4525] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.831 [INFO][4525] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.832 [INFO][4525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.880 [INFO][4525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" host="localhost" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.890 [INFO][4525] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.904 [INFO][4525] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.907 [INFO][4525] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.911 [INFO][4525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.911 [INFO][4525] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" host="localhost" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.919 [INFO][4525] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.932 [INFO][4525] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" host="localhost" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.944 [INFO][4525] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" host="localhost" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.944 [INFO][4525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" host="localhost" Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.944 [INFO][4525] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:15:26.980824 containerd[1606]: 2025-12-16 03:15:26.944 [INFO][4525] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" HandleID="k8s-pod-network.8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Workload="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0" Dec 16 03:15:26.981867 containerd[1606]: 2025-12-16 03:15:26.951 [INFO][4430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-c7929" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0", GenerateName:"calico-apiserver-6d7bb69b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e2da91d-bd6f-474c-851c-2fd9d9db86f3", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 55, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7bb69b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d7bb69b54-c7929", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c8a9d146c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:26.981867 containerd[1606]: 2025-12-16 03:15:26.951 [INFO][4430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-c7929" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0" Dec 16 03:15:26.981867 containerd[1606]: 2025-12-16 03:15:26.951 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c8a9d146c5 ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-c7929" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0" Dec 16 03:15:26.981867 containerd[1606]: 2025-12-16 03:15:26.960 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-c7929" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0" Dec 16 03:15:26.981867 containerd[1606]: 2025-12-16 03:15:26.961 [INFO][4430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-c7929" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0", GenerateName:"calico-apiserver-6d7bb69b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"9e2da91d-bd6f-474c-851c-2fd9d9db86f3", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7bb69b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a", Pod:"calico-apiserver-6d7bb69b54-c7929", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c8a9d146c5", MAC:"b6:fa:a7:29:06:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:26.981867 containerd[1606]: 2025-12-16 03:15:26.971 [INFO][4430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-c7929" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--c7929-eth0" Dec 16 03:15:27.025071 systemd[1]: Started cri-containerd-8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6.scope - libcontainer container 8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6. Dec 16 03:15:27.042304 containerd[1606]: time="2025-12-16T03:15:27.042205023Z" level=info msg="connecting to shim 8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a" address="unix:///run/containerd/s/6a3f144ac62bd9309e6cdbd9e4594e6175125d8ec0a0c2cd6583f7c775fd35b7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:27.043000 audit: BPF prog-id=214 op=LOAD Dec 16 03:15:27.044000 audit: BPF prog-id=215 op=LOAD Dec 16 03:15:27.044000 audit[4609]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4592 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666537663362393365343663323962396230633237656465393530 Dec 16 03:15:27.044000 audit: BPF prog-id=215 op=UNLOAD Dec 16 03:15:27.044000 audit[4609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=14 a1=0 a2=0 a3=0 items=0 ppid=4592 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666537663362393365343663323962396230633237656465393530 Dec 16 03:15:27.044000 audit: BPF prog-id=216 op=LOAD Dec 16 03:15:27.044000 audit[4609]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4592 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666537663362393365343663323962396230633237656465393530 Dec 16 03:15:27.044000 audit: BPF prog-id=217 op=LOAD Dec 16 03:15:27.044000 audit[4609]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4592 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666537663362393365343663323962396230633237656465393530 Dec 16 03:15:27.045000 audit: BPF prog-id=217 op=UNLOAD Dec 16 03:15:27.045000 audit[4609]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4592 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666537663362393365343663323962396230633237656465393530 Dec 16 03:15:27.045000 audit: BPF prog-id=216 op=UNLOAD Dec 16 03:15:27.045000 audit[4609]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4592 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666537663362393365343663323962396230633237656465393530 Dec 16 03:15:27.045000 audit: BPF prog-id=218 op=LOAD Dec 16 03:15:27.045000 audit[4609]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4592 pid=4609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666537663362393365343663323962396230633237656465393530 Dec 16 03:15:27.047932 systemd-resolved[1283]: Failed to determine 
the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:15:27.074215 systemd[1]: Started cri-containerd-8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a.scope - libcontainer container 8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a. Dec 16 03:15:27.084225 containerd[1606]: time="2025-12-16T03:15:27.084171133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:27.089966 containerd[1606]: time="2025-12-16T03:15:27.089778864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:15:27.091812 containerd[1606]: time="2025-12-16T03:15:27.091777921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:27.092317 kubelet[2848]: E1216 03:15:27.092276 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:15:27.093662 kubelet[2848]: E1216 03:15:27.092894 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:15:27.096127 containerd[1606]: time="2025-12-16T03:15:27.095949755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:15:27.102525 kubelet[2848]: E1216 03:15:27.102033 2848 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtrdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79777bd46b-pgqh8_calico-system(629371f1-f66b-44ce-8151-2d326255465b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:27.104061 kubelet[2848]: E1216 03:15:27.103510 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8" podUID="629371f1-f66b-44ce-8151-2d326255465b" Dec 16 03:15:27.105000 audit: BPF prog-id=219 op=LOAD Dec 16 03:15:27.106000 audit: BPF prog-id=220 op=LOAD Dec 16 03:15:27.106000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4636 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353066653361653266366532303239313233333630613835616535 Dec 16 03:15:27.106000 audit: BPF prog-id=220 op=UNLOAD Dec 16 03:15:27.106000 audit[4648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4636 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353066653361653266366532303239313233333630613835616535 Dec 16 03:15:27.107000 audit: BPF prog-id=221 op=LOAD Dec 16 03:15:27.107000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4636 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353066653361653266366532303239313233333630613835616535 Dec 16 03:15:27.108000 audit: BPF prog-id=222 op=LOAD Dec 16 03:15:27.108000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4636 pid=4648 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353066653361653266366532303239313233333630613835616535 Dec 16 03:15:27.108000 audit: BPF prog-id=222 op=UNLOAD Dec 16 03:15:27.108000 audit[4648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4636 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353066653361653266366532303239313233333630613835616535 Dec 16 03:15:27.108000 audit: BPF prog-id=221 op=UNLOAD Dec 16 03:15:27.109793 systemd-networkd[1500]: cali9170dfe05aa: Link UP Dec 16 03:15:27.108000 audit[4671]: NETFILTER_CFG table=filter:128 family=2 entries=58 op=nft_register_chain pid=4671 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:27.108000 audit[4648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4636 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.108000 audit[4671]: SYSCALL arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7ffc368f77e0 a2=0 a3=7ffc368f77cc items=0 ppid=4062 pid=4671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.108000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:27.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353066653361653266366532303239313233333630613835616535 Dec 16 03:15:27.109000 audit: BPF prog-id=223 op=LOAD Dec 16 03:15:27.109000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4636 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865353066653361653266366532303239313233333630613835616535 Dec 16 03:15:27.115017 systemd-networkd[1500]: cali9170dfe05aa: Gained carrier Dec 16 03:15:27.119330 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:15:27.128743 containerd[1606]: time="2025-12-16T03:15:27.127632603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l2dsg,Uid:7167e9f6-6a0b-43eb-9986-460671ce7a4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6\"" Dec 16 03:15:27.131207 kubelet[2848]: E1216 03:15:27.131172 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:27.136878 containerd[1606]: time="2025-12-16T03:15:27.136830945Z" level=info msg="CreateContainer within sandbox \"8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:26.804 [INFO][4474] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--h4rmp-eth0 csi-node-driver- calico-system 05279c13-1f07-47f3-aaa0-f3eff20006ee 768 0 2025-12-16 03:15:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-h4rmp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9170dfe05aa [] [] }} ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" Namespace="calico-system" Pod="csi-node-driver-h4rmp" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4rmp-" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:26.805 [INFO][4474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" Namespace="calico-system" Pod="csi-node-driver-h4rmp" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4rmp-eth0" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:26.880 [INFO][4540] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" HandleID="k8s-pod-network.6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" 
Workload="localhost-k8s-csi--node--driver--h4rmp-eth0" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:26.880 [INFO][4540] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" HandleID="k8s-pod-network.6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" Workload="localhost-k8s-csi--node--driver--h4rmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-h4rmp", "timestamp":"2025-12-16 03:15:26.880045813 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:26.880 [INFO][4540] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:26.944 [INFO][4540] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:26.944 [INFO][4540] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:26.978 [INFO][4540] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" host="localhost" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:26.990 [INFO][4540] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:26.999 [INFO][4540] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:27.001 [INFO][4540] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:27.004 [INFO][4540] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:27.004 [INFO][4540] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" host="localhost" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:27.006 [INFO][4540] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28 Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:27.025 [INFO][4540] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" host="localhost" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:27.067 [INFO][4540] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" host="localhost" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:27.067 [INFO][4540] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" host="localhost" Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:27.067 [INFO][4540] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:15:27.142759 containerd[1606]: 2025-12-16 03:15:27.067 [INFO][4540] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" HandleID="k8s-pod-network.6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" Workload="localhost-k8s-csi--node--driver--h4rmp-eth0" Dec 16 03:15:27.143347 containerd[1606]: 2025-12-16 03:15:27.087 [INFO][4474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" Namespace="calico-system" Pod="csi-node-driver-h4rmp" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4rmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h4rmp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"05279c13-1f07-47f3-aaa0-f3eff20006ee", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-h4rmp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9170dfe05aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:27.143347 containerd[1606]: 2025-12-16 03:15:27.087 [INFO][4474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" Namespace="calico-system" Pod="csi-node-driver-h4rmp" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4rmp-eth0" Dec 16 03:15:27.143347 containerd[1606]: 2025-12-16 03:15:27.087 [INFO][4474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9170dfe05aa ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" Namespace="calico-system" Pod="csi-node-driver-h4rmp" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4rmp-eth0" Dec 16 03:15:27.143347 containerd[1606]: 2025-12-16 03:15:27.114 [INFO][4474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" Namespace="calico-system" Pod="csi-node-driver-h4rmp" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4rmp-eth0" Dec 16 03:15:27.143347 containerd[1606]: 2025-12-16 03:15:27.121 [INFO][4474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" 
Namespace="calico-system" Pod="csi-node-driver-h4rmp" WorkloadEndpoint="localhost-k8s-csi--node--driver--h4rmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--h4rmp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"05279c13-1f07-47f3-aaa0-f3eff20006ee", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 15, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28", Pod:"csi-node-driver-h4rmp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9170dfe05aa", MAC:"b6:b5:59:28:e8:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:27.143347 containerd[1606]: 2025-12-16 03:15:27.137 [INFO][4474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" Namespace="calico-system" Pod="csi-node-driver-h4rmp" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--h4rmp-eth0" Dec 16 03:15:27.168842 containerd[1606]: time="2025-12-16T03:15:27.168764439Z" level=info msg="Container 6a82776cffec5745ee5cc27b2de4e330cffdbb28120d42d513d87efa1568bafe: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:27.172801 systemd-networkd[1500]: cali74696440517: Link UP Dec 16 03:15:27.173908 systemd-networkd[1500]: cali74696440517: Gained carrier Dec 16 03:15:27.181893 containerd[1606]: time="2025-12-16T03:15:27.181651346Z" level=info msg="CreateContainer within sandbox \"8efe7f3b93e46c29b9b0c27ede950cc801261973e5ba935997d17159d8c156b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a82776cffec5745ee5cc27b2de4e330cffdbb28120d42d513d87efa1568bafe\"" Dec 16 03:15:27.186884 containerd[1606]: time="2025-12-16T03:15:27.186825875Z" level=info msg="StartContainer for \"6a82776cffec5745ee5cc27b2de4e330cffdbb28120d42d513d87efa1568bafe\"" Dec 16 03:15:27.188577 containerd[1606]: time="2025-12-16T03:15:27.187064057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7bb69b54-c7929,Uid:9e2da91d-bd6f-474c-851c-2fd9d9db86f3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8e50fe3ae2f6e2029123360a85ae540a4e0aa5aa87349d0c3666f32fb704108a\"" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:26.810 [INFO][4460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0 calico-apiserver-6d7bb69b54- calico-apiserver f6a2c05c-26b5-45cc-94cd-f96ac9ec6971 894 0 2025-12-16 03:14:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d7bb69b54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d7bb69b54-5cccw eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali74696440517 [] [] }} ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-5cccw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:26.811 [INFO][4460] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-5cccw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:26.880 [INFO][4546] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" HandleID="k8s-pod-network.43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Workload="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:26.880 [INFO][4546] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" HandleID="k8s-pod-network.43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Workload="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d7bb69b54-5cccw", "timestamp":"2025-12-16 03:15:26.88040207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:26.880 [INFO][4546] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.067 [INFO][4546] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.067 [INFO][4546] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.104 [INFO][4546] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" host="localhost" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.119 [INFO][4546] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.125 [INFO][4546] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.130 [INFO][4546] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.138 [INFO][4546] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.138 [INFO][4546] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" host="localhost" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.144 [INFO][4546] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.151 [INFO][4546] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" host="localhost" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 
03:15:27.161 [INFO][4546] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" host="localhost" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.161 [INFO][4546] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" host="localhost" Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.161 [INFO][4546] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:15:27.190622 containerd[1606]: 2025-12-16 03:15:27.161 [INFO][4546] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" HandleID="k8s-pod-network.43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Workload="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0" Dec 16 03:15:27.191300 containerd[1606]: 2025-12-16 03:15:27.166 [INFO][4460] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-5cccw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0", GenerateName:"calico-apiserver-6d7bb69b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6a2c05c-26b5-45cc-94cd-f96ac9ec6971", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"6d7bb69b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d7bb69b54-5cccw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74696440517", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:27.191300 containerd[1606]: 2025-12-16 03:15:27.166 [INFO][4460] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-5cccw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0" Dec 16 03:15:27.191300 containerd[1606]: 2025-12-16 03:15:27.167 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74696440517 ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-5cccw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0" Dec 16 03:15:27.191300 containerd[1606]: 2025-12-16 03:15:27.173 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-5cccw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0" Dec 16 03:15:27.191300 
containerd[1606]: 2025-12-16 03:15:27.174 [INFO][4460] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-5cccw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0", GenerateName:"calico-apiserver-6d7bb69b54-", Namespace:"calico-apiserver", SelfLink:"", UID:"f6a2c05c-26b5-45cc-94cd-f96ac9ec6971", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d7bb69b54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d", Pod:"calico-apiserver-6d7bb69b54-5cccw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74696440517", MAC:"aa:3d:25:18:74:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:27.191300 containerd[1606]: 2025-12-16 
03:15:27.185 [INFO][4460] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d7bb69b54-5cccw" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d7bb69b54--5cccw-eth0" Dec 16 03:15:27.194007 containerd[1606]: time="2025-12-16T03:15:27.191073192Z" level=info msg="connecting to shim 6a82776cffec5745ee5cc27b2de4e330cffdbb28120d42d513d87efa1568bafe" address="unix:///run/containerd/s/d34b5084a4946d6503c488798cd7276748bf16bf15d067a3d6c38d6a91f07a05" protocol=ttrpc version=3 Dec 16 03:15:27.194007 containerd[1606]: time="2025-12-16T03:15:27.193966017Z" level=info msg="connecting to shim 6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28" address="unix:///run/containerd/s/c6f7d41d6d81f8424925c1d8bc1d099354ba2bc61cac8286b4285ce153e3bc4a" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:27.215000 audit[4729]: NETFILTER_CFG table=filter:129 family=2 entries=48 op=nft_register_chain pid=4729 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:27.215000 audit[4729]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffff1db75d0 a2=0 a3=7ffff1db75bc items=0 ppid=4062 pid=4729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.215000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:27.230039 systemd[1]: Started cri-containerd-6a82776cffec5745ee5cc27b2de4e330cffdbb28120d42d513d87efa1568bafe.scope - libcontainer container 6a82776cffec5745ee5cc27b2de4e330cffdbb28120d42d513d87efa1568bafe. 
Dec 16 03:15:27.236903 systemd[1]: Started cri-containerd-6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28.scope - libcontainer container 6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28. Dec 16 03:15:27.242765 containerd[1606]: time="2025-12-16T03:15:27.242673471Z" level=info msg="connecting to shim 43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d" address="unix:///run/containerd/s/00106a7bfedd9158c674cb8bfd38e2496b7f7cfe39b7e8305e4499ada7a005b2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:27.261000 audit: BPF prog-id=224 op=LOAD Dec 16 03:15:27.262000 audit: BPF prog-id=225 op=LOAD Dec 16 03:15:27.262000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4592 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383237373663666665633537343565653563633237623264653465 Dec 16 03:15:27.262000 audit: BPF prog-id=225 op=UNLOAD Dec 16 03:15:27.262000 audit[4714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4592 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383237373663666665633537343565653563633237623264653465 Dec 16 03:15:27.262000 audit: BPF prog-id=226 
op=LOAD Dec 16 03:15:27.262000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4592 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383237373663666665633537343565653563633237623264653465 Dec 16 03:15:27.262000 audit: BPF prog-id=227 op=LOAD Dec 16 03:15:27.262000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4592 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383237373663666665633537343565653563633237623264653465 Dec 16 03:15:27.262000 audit: BPF prog-id=227 op=UNLOAD Dec 16 03:15:27.262000 audit[4714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4592 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383237373663666665633537343565653563633237623264653465 Dec 
16 03:15:27.262000 audit: BPF prog-id=226 op=UNLOAD Dec 16 03:15:27.262000 audit[4714]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4592 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383237373663666665633537343565653563633237623264653465 Dec 16 03:15:27.262000 audit: BPF prog-id=228 op=LOAD Dec 16 03:15:27.262000 audit[4714]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4592 pid=4714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661383237373663666665633537343565653563633237623264653465 Dec 16 03:15:27.267000 audit: BPF prog-id=229 op=LOAD Dec 16 03:15:27.273000 audit: BPF prog-id=230 op=LOAD Dec 16 03:15:27.273000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4702 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.273000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393363626631623664356466656266396338643837383165643462 Dec 16 03:15:27.273000 audit: BPF prog-id=230 op=UNLOAD Dec 16 03:15:27.273000 audit[4721]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4702 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393363626631623664356466656266396338643837383165643462 Dec 16 03:15:27.273000 audit: BPF prog-id=231 op=LOAD Dec 16 03:15:27.273000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4702 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393363626631623664356466656266396338643837383165643462 Dec 16 03:15:27.273000 audit: BPF prog-id=232 op=LOAD Dec 16 03:15:27.273000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4702 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:15:27.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393363626631623664356466656266396338643837383165643462 Dec 16 03:15:27.273000 audit: BPF prog-id=232 op=UNLOAD Dec 16 03:15:27.273000 audit[4721]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4702 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393363626631623664356466656266396338643837383165643462 Dec 16 03:15:27.273000 audit: BPF prog-id=231 op=UNLOAD Dec 16 03:15:27.273000 audit[4721]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4702 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393363626631623664356466656266396338643837383165643462 Dec 16 03:15:27.273792 systemd-networkd[1500]: cali413a78e32d4: Link UP Dec 16 03:15:27.276951 systemd-networkd[1500]: cali413a78e32d4: Gained carrier Dec 16 03:15:27.276000 audit: BPF prog-id=233 op=LOAD Dec 16 03:15:27.276000 audit[4721]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 
ppid=4702 pid=4721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.276000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661393363626631623664356466656266396338643837383165643462 Dec 16 03:15:27.279704 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:26.762 [INFO][4444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0 coredns-668d6bf9bc- kube-system 534f1053-c823-4a53-90bf-e8b0c562d2fe 889 0 2025-12-16 03:14:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-dvxtw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali413a78e32d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Namespace="kube-system" Pod="coredns-668d6bf9bc-dvxtw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dvxtw-" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:26.762 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Namespace="kube-system" Pod="coredns-668d6bf9bc-dvxtw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:26.883 [INFO][4523] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" HandleID="k8s-pod-network.2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Workload="localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:26.884 [INFO][4523] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" HandleID="k8s-pod-network.2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Workload="localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000492c20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-dvxtw", "timestamp":"2025-12-16 03:15:26.883254279 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:26.884 [INFO][4523] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.161 [INFO][4523] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.162 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.200 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" host="localhost" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.219 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.225 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.227 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.230 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.230 [INFO][4523] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" host="localhost" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.231 [INFO][4523] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.237 [INFO][4523] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" host="localhost" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.247 [INFO][4523] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" host="localhost" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.247 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" host="localhost" Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.247 [INFO][4523] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 03:15:27.296622 containerd[1606]: 2025-12-16 03:15:27.247 [INFO][4523] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" HandleID="k8s-pod-network.2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Workload="localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0" Dec 16 03:15:27.297198 containerd[1606]: 2025-12-16 03:15:27.260 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Namespace="kube-system" Pod="coredns-668d6bf9bc-dvxtw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"534f1053-c823-4a53-90bf-e8b0c562d2fe", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-dvxtw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali413a78e32d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:27.297198 containerd[1606]: 2025-12-16 03:15:27.261 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Namespace="kube-system" Pod="coredns-668d6bf9bc-dvxtw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0" Dec 16 03:15:27.297198 containerd[1606]: 2025-12-16 03:15:27.261 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali413a78e32d4 ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Namespace="kube-system" Pod="coredns-668d6bf9bc-dvxtw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0" Dec 16 03:15:27.297198 containerd[1606]: 2025-12-16 03:15:27.277 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Namespace="kube-system" Pod="coredns-668d6bf9bc-dvxtw" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0" Dec 16 03:15:27.297198 containerd[1606]: 2025-12-16 03:15:27.279 [INFO][4444] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Namespace="kube-system" Pod="coredns-668d6bf9bc-dvxtw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"534f1053-c823-4a53-90bf-e8b0c562d2fe", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b", Pod:"coredns-668d6bf9bc-dvxtw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali413a78e32d4", MAC:"32:80:39:4c:11:e8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:27.297198 containerd[1606]: 2025-12-16 03:15:27.291 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" Namespace="kube-system" Pod="coredns-668d6bf9bc-dvxtw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--dvxtw-eth0" Dec 16 03:15:27.307397 systemd[1]: Started cri-containerd-43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d.scope - libcontainer container 43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d. Dec 16 03:15:27.325000 audit: BPF prog-id=234 op=LOAD Dec 16 03:15:27.326838 containerd[1606]: time="2025-12-16T03:15:27.326789861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h4rmp,Uid:05279c13-1f07-47f3-aaa0-f3eff20006ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a93cbf1b6d5dfebf9c8d8781ed4b86b044062e6c451dd0316e9fc74200e0f28\"" Dec 16 03:15:27.326000 audit: BPF prog-id=235 op=LOAD Dec 16 03:15:27.326000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4757 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433616130323866376661323031613132613461373034646466393938 Dec 16 03:15:27.326000 audit: BPF prog-id=235 op=UNLOAD Dec 16 03:15:27.326000 audit[4780]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433616130323866376661323031613132613461373034646466393938 Dec 16 03:15:27.326000 audit: BPF prog-id=236 op=LOAD Dec 16 03:15:27.326000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4757 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433616130323866376661323031613132613461373034646466393938 Dec 16 03:15:27.326000 audit: BPF prog-id=237 op=LOAD Dec 16 03:15:27.326000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4757 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433616130323866376661323031613132613461373034646466393938 Dec 16 03:15:27.327000 audit: BPF prog-id=237 op=UNLOAD 
Dec 16 03:15:27.327000 audit[4780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433616130323866376661323031613132613461373034646466393938 Dec 16 03:15:27.327000 audit: BPF prog-id=236 op=UNLOAD Dec 16 03:15:27.327000 audit[4780]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433616130323866376661323031613132613461373034646466393938 Dec 16 03:15:27.327000 audit: BPF prog-id=238 op=LOAD Dec 16 03:15:27.327000 audit[4780]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4757 pid=4780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433616130323866376661323031613132613461373034646466393938 Dec 16 03:15:27.330845 
systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:15:27.337000 audit[4825]: NETFILTER_CFG table=filter:130 family=2 entries=53 op=nft_register_chain pid=4825 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:27.337000 audit[4825]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7ffe7a6348b0 a2=0 a3=7ffe7a63489c items=0 ppid=4062 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.337000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:27.373526 containerd[1606]: time="2025-12-16T03:15:27.373366398Z" level=info msg="StartContainer for \"6a82776cffec5745ee5cc27b2de4e330cffdbb28120d42d513d87efa1568bafe\" returns successfully" Dec 16 03:15:27.396593 containerd[1606]: time="2025-12-16T03:15:27.396537769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d7bb69b54-5cccw,Uid:f6a2c05c-26b5-45cc-94cd-f96ac9ec6971,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"43aa028f7fa201a12a4a704ddf9982d3217b3816ef04bbaa0e10574748531f3d\"" Dec 16 03:15:27.409700 systemd-networkd[1500]: cali8ec5d4e496b: Link UP Dec 16 03:15:27.411007 systemd-networkd[1500]: cali8ec5d4e496b: Gained carrier Dec 16 03:15:27.417227 containerd[1606]: time="2025-12-16T03:15:27.417168335Z" level=info msg="connecting to shim 2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b" address="unix:///run/containerd/s/0df1fde7579d44ebe7d95078e9333e23f0fe3b4d0c4afd6a260a21e023d05d53" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:26.835 [INFO][4464] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--vwjmk-eth0 goldmane-666569f655- calico-system 5b1e8305-d364-4bc6-9a3a-e97daf2d06ed 890 0 2025-12-16 03:14:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-vwjmk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8ec5d4e496b [] [] }} ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Namespace="calico-system" Pod="goldmane-666569f655-vwjmk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwjmk-" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:26.835 [INFO][4464] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Namespace="calico-system" Pod="goldmane-666569f655-vwjmk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwjmk-eth0" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:26.940 [INFO][4557] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" HandleID="k8s-pod-network.ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Workload="localhost-k8s-goldmane--666569f655--vwjmk-eth0" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:26.940 [INFO][4557] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" HandleID="k8s-pod-network.ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Workload="localhost-k8s-goldmane--666569f655--vwjmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"goldmane-666569f655-vwjmk", "timestamp":"2025-12-16 03:15:26.940295383 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:26.940 [INFO][4557] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.247 [INFO][4557] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.247 [INFO][4557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.307 [INFO][4557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" host="localhost" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.324 [INFO][4557] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.336 [INFO][4557] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.373 [INFO][4557] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.382 [INFO][4557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.382 [INFO][4557] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" host="localhost" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.384 [INFO][4557] ipam/ipam.go 1780: 
Creating new handle: k8s-pod-network.ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7 Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.392 [INFO][4557] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" host="localhost" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.398 [INFO][4557] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" host="localhost" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.399 [INFO][4557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" host="localhost" Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.399 [INFO][4557] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 03:15:27.438675 containerd[1606]: 2025-12-16 03:15:27.399 [INFO][4557] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" HandleID="k8s-pod-network.ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Workload="localhost-k8s-goldmane--666569f655--vwjmk-eth0" Dec 16 03:15:27.439296 containerd[1606]: 2025-12-16 03:15:27.406 [INFO][4464] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Namespace="calico-system" Pod="goldmane-666569f655-vwjmk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwjmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vwjmk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5b1e8305-d364-4bc6-9a3a-e97daf2d06ed", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-vwjmk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ec5d4e496b", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:27.439296 containerd[1606]: 2025-12-16 03:15:27.407 [INFO][4464] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Namespace="calico-system" Pod="goldmane-666569f655-vwjmk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwjmk-eth0" Dec 16 03:15:27.439296 containerd[1606]: 2025-12-16 03:15:27.407 [INFO][4464] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ec5d4e496b ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Namespace="calico-system" Pod="goldmane-666569f655-vwjmk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwjmk-eth0" Dec 16 03:15:27.439296 containerd[1606]: 2025-12-16 03:15:27.415 [INFO][4464] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Namespace="calico-system" Pod="goldmane-666569f655-vwjmk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwjmk-eth0" Dec 16 03:15:27.439296 containerd[1606]: 2025-12-16 03:15:27.418 [INFO][4464] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Namespace="calico-system" Pod="goldmane-666569f655-vwjmk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwjmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--vwjmk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5b1e8305-d364-4bc6-9a3a-e97daf2d06ed", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 3, 14, 58, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7", Pod:"goldmane-666569f655-vwjmk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ec5d4e496b", MAC:"b2:0f:48:c2:2c:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 03:15:27.439296 containerd[1606]: 2025-12-16 03:15:27.432 [INFO][4464] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" Namespace="calico-system" Pod="goldmane-666569f655-vwjmk" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--vwjmk-eth0" Dec 16 03:15:27.439000 audit[4863]: NETFILTER_CFG table=filter:131 family=2 entries=58 op=nft_register_chain pid=4863 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:27.439000 audit[4863]: SYSCALL arch=c000003e syscall=46 success=yes exit=26760 a0=3 a1=7fff64f68c50 a2=0 a3=7fff64f68c3c items=0 ppid=4062 pid=4863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.439000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:27.455898 containerd[1606]: time="2025-12-16T03:15:27.455707135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:27.463677 containerd[1606]: time="2025-12-16T03:15:27.463612722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:15:27.463918 containerd[1606]: time="2025-12-16T03:15:27.463900417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:27.464202 kubelet[2848]: E1216 03:15:27.464137 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:15:27.464272 kubelet[2848]: E1216 03:15:27.464212 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:15:27.464482 kubelet[2848]: E1216 03:15:27.464426 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c5e6ea94913b425cb341c293718167ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wv6n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66c75bc767-db58s_calico-system(a5002d96-1b90-443a-9926-1ad68bf4babc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:27.466109 containerd[1606]: time="2025-12-16T03:15:27.466074607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:15:27.472036 systemd[1]: Started 
cri-containerd-2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b.scope - libcontainer container 2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b. Dec 16 03:15:27.485771 containerd[1606]: time="2025-12-16T03:15:27.485081939Z" level=info msg="connecting to shim ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7" address="unix:///run/containerd/s/b2eb8d15fe576c5d35237e4a10f50325a1195581ac91e54904790568fbcd416d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 03:15:27.507000 audit: BPF prog-id=239 op=LOAD Dec 16 03:15:27.509000 audit: BPF prog-id=240 op=LOAD Dec 16 03:15:27.509000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265393563396332323737393730646134626533373934386165386637 Dec 16 03:15:27.509000 audit: BPF prog-id=240 op=UNLOAD Dec 16 03:15:27.509000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265393563396332323737393730646134626533373934386165386637 Dec 16 03:15:27.509000 audit: BPF prog-id=241 op=LOAD Dec 16 03:15:27.509000 
audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265393563396332323737393730646134626533373934386165386637 Dec 16 03:15:27.510000 audit: BPF prog-id=242 op=LOAD Dec 16 03:15:27.510000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265393563396332323737393730646134626533373934386165386637 Dec 16 03:15:27.510000 audit: BPF prog-id=242 op=UNLOAD Dec 16 03:15:27.510000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265393563396332323737393730646134626533373934386165386637 Dec 16 03:15:27.510000 audit: BPF 
prog-id=241 op=UNLOAD Dec 16 03:15:27.510000 audit[4862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265393563396332323737393730646134626533373934386165386637 Dec 16 03:15:27.511000 audit: BPF prog-id=243 op=LOAD Dec 16 03:15:27.511000 audit[4862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4851 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265393563396332323737393730646134626533373934386165386637 Dec 16 03:15:27.514311 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:15:27.530135 systemd[1]: Started cri-containerd-ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7.scope - libcontainer container ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7. 
Dec 16 03:15:27.533000 audit[4922]: NETFILTER_CFG table=filter:132 family=2 entries=64 op=nft_register_chain pid=4922 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 16 03:15:27.533000 audit[4922]: SYSCALL arch=c000003e syscall=46 success=yes exit=31104 a0=3 a1=7ffda4ce4500 a2=0 a3=7ffda4ce44ec items=0 ppid=4062 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.533000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 16 03:15:27.548000 audit: BPF prog-id=244 op=LOAD Dec 16 03:15:27.549000 audit: BPF prog-id=245 op=LOAD Dec 16 03:15:27.549000 audit[4907]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4889 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.549000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333762326362656636326134336334383836336465656430313532 Dec 16 03:15:27.549000 audit: BPF prog-id=245 op=UNLOAD Dec 16 03:15:27.549000 audit[4907]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4889 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.549000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333762326362656636326134336334383836336465656430313532 Dec 16 03:15:27.550000 audit: BPF prog-id=246 op=LOAD Dec 16 03:15:27.550000 audit[4907]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4889 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333762326362656636326134336334383836336465656430313532 Dec 16 03:15:27.550000 audit: BPF prog-id=247 op=LOAD Dec 16 03:15:27.550000 audit[4907]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4889 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333762326362656636326134336334383836336465656430313532 Dec 16 03:15:27.550000 audit: BPF prog-id=247 op=UNLOAD Dec 16 03:15:27.550000 audit[4907]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4889 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 16 03:15:27.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333762326362656636326134336334383836336465656430313532 Dec 16 03:15:27.550000 audit: BPF prog-id=246 op=UNLOAD Dec 16 03:15:27.550000 audit[4907]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4889 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333762326362656636326134336334383836336465656430313532 Dec 16 03:15:27.550000 audit: BPF prog-id=248 op=LOAD Dec 16 03:15:27.550000 audit[4907]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4889 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564333762326362656636326134336334383836336465656430313532 Dec 16 03:15:27.552651 systemd-resolved[1283]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 16 03:15:27.566051 containerd[1606]: time="2025-12-16T03:15:27.565978874Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-dvxtw,Uid:534f1053-c823-4a53-90bf-e8b0c562d2fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b\"" Dec 16 03:15:27.567300 kubelet[2848]: E1216 03:15:27.567238 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:27.569328 containerd[1606]: time="2025-12-16T03:15:27.569290334Z" level=info msg="CreateContainer within sandbox \"2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 03:15:27.637370 containerd[1606]: time="2025-12-16T03:15:27.636647311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vwjmk,Uid:5b1e8305-d364-4bc6-9a3a-e97daf2d06ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed37b2cbef62a43c48863deed01527831880cc1c78296a8e5d6554b467292fa7\"" Dec 16 03:15:27.647748 containerd[1606]: time="2025-12-16T03:15:27.647520133Z" level=info msg="Container 1dd53c70c766e3dc14cbab31f3521dc669d2dc48e7317cbc72e1694f67cc9163: CDI devices from CRI Config.CDIDevices: []" Dec 16 03:15:27.662517 containerd[1606]: time="2025-12-16T03:15:27.662456964Z" level=info msg="CreateContainer within sandbox \"2e95c9c2277970da4be37948ae8f7a24cf9fe4c277e6d825d6a0fed30be87d3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1dd53c70c766e3dc14cbab31f3521dc669d2dc48e7317cbc72e1694f67cc9163\"" Dec 16 03:15:27.663121 containerd[1606]: time="2025-12-16T03:15:27.663077102Z" level=info msg="StartContainer for \"1dd53c70c766e3dc14cbab31f3521dc669d2dc48e7317cbc72e1694f67cc9163\"" Dec 16 03:15:27.664264 containerd[1606]: time="2025-12-16T03:15:27.664237867Z" level=info msg="connecting to shim 1dd53c70c766e3dc14cbab31f3521dc669d2dc48e7317cbc72e1694f67cc9163" 
address="unix:///run/containerd/s/0df1fde7579d44ebe7d95078e9333e23f0fe3b4d0c4afd6a260a21e023d05d53" protocol=ttrpc version=3 Dec 16 03:15:27.691063 systemd[1]: Started cri-containerd-1dd53c70c766e3dc14cbab31f3521dc669d2dc48e7317cbc72e1694f67cc9163.scope - libcontainer container 1dd53c70c766e3dc14cbab31f3521dc669d2dc48e7317cbc72e1694f67cc9163. Dec 16 03:15:27.711000 audit: BPF prog-id=249 op=LOAD Dec 16 03:15:27.712000 audit: BPF prog-id=250 op=LOAD Dec 16 03:15:27.712000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4851 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643533633730633736366533646331346362616233316633353231 Dec 16 03:15:27.712000 audit: BPF prog-id=250 op=UNLOAD Dec 16 03:15:27.712000 audit[4944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643533633730633736366533646331346362616233316633353231 Dec 16 03:15:27.712000 audit: BPF prog-id=251 op=LOAD Dec 16 03:15:27.712000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4851 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643533633730633736366533646331346362616233316633353231 Dec 16 03:15:27.712000 audit: BPF prog-id=252 op=LOAD Dec 16 03:15:27.712000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4851 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643533633730633736366533646331346362616233316633353231 Dec 16 03:15:27.712000 audit: BPF prog-id=252 op=UNLOAD Dec 16 03:15:27.712000 audit[4944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643533633730633736366533646331346362616233316633353231 Dec 16 03:15:27.712000 audit: BPF prog-id=251 op=UNLOAD Dec 16 03:15:27.712000 audit[4944]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4851 pid=4944 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643533633730633736366533646331346362616233316633353231 Dec 16 03:15:27.712000 audit: BPF prog-id=253 op=LOAD Dec 16 03:15:27.712000 audit[4944]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4851 pid=4944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164643533633730633736366533646331346362616233316633353231 Dec 16 03:15:27.741578 containerd[1606]: time="2025-12-16T03:15:27.741517951Z" level=info msg="StartContainer for \"1dd53c70c766e3dc14cbab31f3521dc669d2dc48e7317cbc72e1694f67cc9163\" returns successfully" Dec 16 03:15:27.757254 systemd-networkd[1500]: calib4d5c351b47: Gained IPv6LL Dec 16 03:15:27.770765 kubelet[2848]: E1216 03:15:27.770527 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:27.770988 containerd[1606]: time="2025-12-16T03:15:27.770935117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:27.773741 containerd[1606]: time="2025-12-16T03:15:27.773669571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:15:27.773830 containerd[1606]: time="2025-12-16T03:15:27.773777736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:27.774158 kubelet[2848]: E1216 03:15:27.774066 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:27.774158 kubelet[2848]: E1216 03:15:27.774133 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:27.774494 kubelet[2848]: E1216 03:15:27.774425 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtxvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7bb69b54-c7929_calico-apiserver(9e2da91d-bd6f-474c-851c-2fd9d9db86f3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:27.774631 containerd[1606]: time="2025-12-16T03:15:27.774494407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:15:27.775617 kubelet[2848]: E1216 03:15:27.775549 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929" podUID="9e2da91d-bd6f-474c-851c-2fd9d9db86f3" Dec 16 03:15:27.777542 kubelet[2848]: E1216 03:15:27.777398 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:27.797165 kubelet[2848]: E1216 03:15:27.797065 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8" podUID="629371f1-f66b-44ce-8151-2d326255465b" Dec 16 03:15:27.798708 kubelet[2848]: E1216 03:15:27.797457 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929" podUID="9e2da91d-bd6f-474c-851c-2fd9d9db86f3" Dec 16 03:15:27.800693 kubelet[2848]: I1216 03:15:27.800616 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l2dsg" podStartSLOduration=46.800595463 podStartE2EDuration="46.800595463s" podCreationTimestamp="2025-12-16 03:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:15:27.797793772 +0000 UTC m=+50.536486182" watchObservedRunningTime="2025-12-16 03:15:27.800595463 +0000 UTC m=+50.539287873" Dec 16 03:15:27.853926 kubelet[2848]: I1216 03:15:27.853601 2848 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dvxtw" podStartSLOduration=46.853577577 podStartE2EDuration="46.853577577s" podCreationTimestamp="2025-12-16 03:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 03:15:27.831594492 +0000 UTC m=+50.570286902" watchObservedRunningTime="2025-12-16 03:15:27.853577577 +0000 UTC m=+50.592269987" Dec 16 03:15:27.874000 audit[4985]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4985 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:27.874000 audit[4985]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffde0affa70 a2=0 a3=7ffde0affa5c items=0 ppid=2997 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.874000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:27.881000 audit[4985]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4985 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:27.881000 audit[4985]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffde0affa70 a2=0 a3=0 items=0 ppid=2997 pid=4985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:27.884237 systemd-networkd[1500]: calide655afb685: Gained IPv6LL Dec 16 03:15:27.940000 audit[4987]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=4987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:27.940000 audit[4987]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffddcdc1340 a2=0 a3=7ffddcdc132c items=0 ppid=2997 pid=4987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:27.950000 audit[4987]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=4987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:27.950000 audit[4987]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffddcdc1340 a2=0 a3=0 items=0 ppid=2997 pid=4987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:27.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:28.132594 containerd[1606]: time="2025-12-16T03:15:28.132242361Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:28.140054 systemd-networkd[1500]: cali4c8a9d146c5: Gained IPv6LL Dec 16 03:15:28.152763 containerd[1606]: time="2025-12-16T03:15:28.152597323Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:15:28.152763 containerd[1606]: time="2025-12-16T03:15:28.152675881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:28.153328 kubelet[2848]: E1216 03:15:28.153278 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:15:28.153897 kubelet[2848]: E1216 03:15:28.153341 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:15:28.153897 kubelet[2848]: E1216 03:15:28.153616 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvmgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h4rmp_calico-system(05279c13-1f07-47f3-aaa0-f3eff20006ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:15:28.154067 containerd[1606]: time="2025-12-16T03:15:28.154012900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:15:28.460027 systemd-networkd[1500]: cali8ec5d4e496b: Gained IPv6LL Dec 16 03:15:28.495518 containerd[1606]: time="2025-12-16T03:15:28.495439983Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:28.554756 containerd[1606]: time="2025-12-16T03:15:28.554619055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:28.554960 containerd[1606]: time="2025-12-16T03:15:28.554749683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:15:28.555216 kubelet[2848]: E1216 03:15:28.555126 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:28.555293 kubelet[2848]: E1216 03:15:28.555222 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:28.555957 kubelet[2848]: E1216 03:15:28.555571 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6cjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7bb69b54-5cccw_calico-apiserver(f6a2c05c-26b5-45cc-94cd-f96ac9ec6971): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:28.556269 containerd[1606]: time="2025-12-16T03:15:28.555668418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:15:28.557073 kubelet[2848]: E1216 03:15:28.557022 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-5cccw" podUID="f6a2c05c-26b5-45cc-94cd-f96ac9ec6971" Dec 16 03:15:28.587950 systemd-networkd[1500]: cali18a16bf3740: Gained IPv6LL Dec 16 03:15:28.716940 systemd-networkd[1500]: cali74696440517: Gained IPv6LL Dec 16 03:15:28.723581 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:36336.service - OpenSSH per-connection server daemon (10.0.0.1:36336). Dec 16 03:15:28.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.26:22-10.0.0.1:36336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:28.738451 kernel: kauditd_printk_skb: 470 callbacks suppressed Dec 16 03:15:28.738571 kernel: audit: type=1130 audit(1765854928.722:735): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.26:22-10.0.0.1:36336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:28.779871 systemd-networkd[1500]: cali9170dfe05aa: Gained IPv6LL Dec 16 03:15:28.796825 kubelet[2848]: E1216 03:15:28.796411 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:28.796825 kubelet[2848]: E1216 03:15:28.796704 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:28.797810 kubelet[2848]: E1216 03:15:28.797782 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929" podUID="9e2da91d-bd6f-474c-851c-2fd9d9db86f3" Dec 16 03:15:28.798722 kubelet[2848]: E1216 03:15:28.798635 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-5cccw" podUID="f6a2c05c-26b5-45cc-94cd-f96ac9ec6971" Dec 16 03:15:28.832000 audit[4989]: USER_ACCT pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.833109 sshd[4989]: Accepted publickey for core from 10.0.0.1 port 36336 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:15:28.836163 sshd-session[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:28.833000 audit[4989]: CRED_ACQ pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.841315 systemd-logind[1586]: New session 12 of user core. Dec 16 03:15:28.844367 kernel: audit: type=1101 audit(1765854928.832:736): pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.844426 kernel: audit: type=1103 audit(1765854928.833:737): pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.847617 kernel: audit: type=1006 audit(1765854928.833:738): pid=4989 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 16 03:15:28.847703 systemd-networkd[1500]: cali413a78e32d4: Gained IPv6LL Dec 16 03:15:28.833000 audit[4989]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd327598a0 a2=3 a3=0 items=0 ppid=1 pid=4989 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:28.848921 systemd[1]: Started session-12.scope - Session 
12 of User core. Dec 16 03:15:28.833000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:28.856349 kernel: audit: type=1300 audit(1765854928.833:738): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd327598a0 a2=3 a3=0 items=0 ppid=1 pid=4989 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:28.856407 kernel: audit: type=1327 audit(1765854928.833:738): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:28.856443 kernel: audit: type=1105 audit(1765854928.851:739): pid=4989 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.851000 audit[4989]: USER_START pid=4989 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.853000 audit[4993]: CRED_ACQ pid=4993 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.870768 kernel: audit: type=1103 audit(1765854928.853:740): pid=4993 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.872267 containerd[1606]: 
time="2025-12-16T03:15:28.872077896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:28.940418 containerd[1606]: time="2025-12-16T03:15:28.940325298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:15:28.943468 containerd[1606]: time="2025-12-16T03:15:28.940461486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:28.943468 containerd[1606]: time="2025-12-16T03:15:28.942447017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:15:28.943566 kubelet[2848]: E1216 03:15:28.940954 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:15:28.943566 kubelet[2848]: E1216 03:15:28.941050 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:15:28.943566 kubelet[2848]: E1216 03:15:28.942271 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wv6n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66c75bc767-db58s_calico-system(a5002d96-1b90-443a-9926-1ad68bf4babc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:28.944238 kubelet[2848]: E1216 03:15:28.944171 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c75bc767-db58s" podUID="a5002d96-1b90-443a-9926-1ad68bf4babc" Dec 16 03:15:28.966221 sshd[4993]: Connection closed by 10.0.0.1 port 36336 Dec 16 03:15:28.964623 sshd-session[4989]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:28.966000 audit[4989]: USER_END pid=4989 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.970609 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:36336.service: Deactivated successfully. 
Dec 16 03:15:28.975762 kernel: audit: type=1106 audit(1765854928.966:741): pid=4989 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.984792 kernel: audit: type=1104 audit(1765854928.966:742): pid=4989 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.966000 audit[4989]: CRED_DISP pid=4989 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:28.981775 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 03:15:28.987103 systemd-logind[1586]: Session 12 logged out. Waiting for processes to exit. Dec 16 03:15:28.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.26:22-10.0.0.1:36336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:28.989401 systemd-logind[1586]: Removed session 12. 
Dec 16 03:15:28.997000 audit[5009]: NETFILTER_CFG table=filter:137 family=2 entries=20 op=nft_register_rule pid=5009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:28.997000 audit[5009]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffeeea2c9c0 a2=0 a3=7ffeeea2c9ac items=0 ppid=2997 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:28.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:29.004000 audit[5009]: NETFILTER_CFG table=nat:138 family=2 entries=14 op=nft_register_rule pid=5009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:29.004000 audit[5009]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffeeea2c9c0 a2=0 a3=0 items=0 ppid=2997 pid=5009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:29.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:29.300165 containerd[1606]: time="2025-12-16T03:15:29.299975931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:29.301354 containerd[1606]: time="2025-12-16T03:15:29.301289695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:15:29.301453 containerd[1606]: time="2025-12-16T03:15:29.301393202Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:29.301660 kubelet[2848]: E1216 03:15:29.301589 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:15:29.301660 kubelet[2848]: E1216 03:15:29.301657 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:15:29.302164 kubelet[2848]: E1216 03:15:29.301993 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zplnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vwjmk_calico-system(5b1e8305-d364-4bc6-9a3a-e97daf2d06ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:29.302369 containerd[1606]: time="2025-12-16T03:15:29.302341181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" 
Dec 16 03:15:29.303342 kubelet[2848]: E1216 03:15:29.303274 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwjmk" podUID="5b1e8305-d364-4bc6-9a3a-e97daf2d06ed" Dec 16 03:15:29.616426 containerd[1606]: time="2025-12-16T03:15:29.616366902Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:29.617786 containerd[1606]: time="2025-12-16T03:15:29.617689723Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:15:29.617894 containerd[1606]: time="2025-12-16T03:15:29.617691436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:29.618192 kubelet[2848]: E1216 03:15:29.618127 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:15:29.618261 kubelet[2848]: E1216 03:15:29.618206 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:15:29.618474 kubelet[2848]: E1216 03:15:29.618397 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvmgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h4rmp_calico-system(05279c13-1f07-47f3-aaa0-f3eff20006ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:29.619690 kubelet[2848]: E1216 03:15:29.619623 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:15:29.798859 kubelet[2848]: E1216 03:15:29.798800 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:29.800635 kubelet[2848]: E1216 03:15:29.800523 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:29.801358 kubelet[2848]: E1216 03:15:29.801291 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwjmk" podUID="5b1e8305-d364-4bc6-9a3a-e97daf2d06ed" Dec 16 03:15:29.803544 kubelet[2848]: E1216 03:15:29.803478 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c75bc767-db58s" podUID="a5002d96-1b90-443a-9926-1ad68bf4babc" Dec 16 03:15:29.803795 kubelet[2848]: E1216 03:15:29.803674 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:15:29.892000 audit[5011]: NETFILTER_CFG table=filter:139 family=2 entries=17 op=nft_register_rule pid=5011 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:29.892000 audit[5011]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff72d4d8c0 a2=0 a3=7fff72d4d8ac items=0 ppid=2997 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:29.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:29.908000 audit[5011]: NETFILTER_CFG table=nat:140 family=2 entries=47 op=nft_register_chain pid=5011 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:29.908000 audit[5011]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff72d4d8c0 a2=0 a3=7fff72d4d8ac items=0 ppid=2997 pid=5011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:29.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:30.930000 audit[5014]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5014 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:30.930000 audit[5014]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff207c22e0 a2=0 a3=7fff207c22cc items=0 ppid=2997 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:30.930000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:30.943000 audit[5014]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5014 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:15:30.943000 audit[5014]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff207c22e0 a2=0 a3=7fff207c22cc items=0 ppid=2997 pid=5014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:30.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:15:33.981350 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:48754.service - OpenSSH per-connection server daemon (10.0.0.1:48754). Dec 16 03:15:33.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.26:22-10.0.0.1:48754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:33.983060 kernel: kauditd_printk_skb: 19 callbacks suppressed Dec 16 03:15:33.983153 kernel: audit: type=1130 audit(1765854933.980:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.26:22-10.0.0.1:48754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:34.045000 audit[5024]: USER_ACCT pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.046303 sshd[5024]: Accepted publickey for core from 10.0.0.1 port 48754 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:15:34.048760 sshd-session[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:34.046000 audit[5024]: CRED_ACQ pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.054069 systemd-logind[1586]: New session 13 of user core. Dec 16 03:15:34.056555 kernel: audit: type=1101 audit(1765854934.045:751): pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.056651 kernel: audit: type=1103 audit(1765854934.046:752): pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.056679 kernel: audit: type=1006 audit(1765854934.046:753): pid=5024 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 16 03:15:34.046000 audit[5024]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdda5e8b30 a2=3 a3=0 items=0 ppid=1 pid=5024 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:34.065541 kernel: audit: type=1300 audit(1765854934.046:753): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdda5e8b30 a2=3 a3=0 items=0 ppid=1 pid=5024 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:34.065599 kernel: audit: type=1327 audit(1765854934.046:753): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:34.046000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:34.073014 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 03:15:34.075000 audit[5024]: USER_START pid=5024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.077000 audit[5028]: CRED_ACQ pid=5028 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.088315 kernel: audit: type=1105 audit(1765854934.075:754): pid=5024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.088395 kernel: audit: type=1103 audit(1765854934.077:755): pid=5028 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.318046 sshd[5028]: Connection closed by 10.0.0.1 port 48754 Dec 16 03:15:34.318955 sshd-session[5024]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:34.319000 audit[5024]: USER_END pid=5024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.323777 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:48754.service: Deactivated successfully. Dec 16 03:15:34.326018 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 03:15:34.326977 systemd-logind[1586]: Session 13 logged out. Waiting for processes to exit. Dec 16 03:15:34.328523 systemd-logind[1586]: Removed session 13. 
Dec 16 03:15:34.319000 audit[5024]: CRED_DISP pid=5024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.343102 kernel: audit: type=1106 audit(1765854934.319:756): pid=5024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.343816 kernel: audit: type=1104 audit(1765854934.319:757): pid=5024 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:34.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.26:22-10.0.0.1:48754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:39.342953 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:48760.service - OpenSSH per-connection server daemon (10.0.0.1:48760). Dec 16 03:15:39.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.26:22-10.0.0.1:48760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:39.358231 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:15:39.358372 kernel: audit: type=1130 audit(1765854939.342:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.26:22-10.0.0.1:48760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:39.415000 audit[5053]: USER_ACCT pid=5053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.416513 sshd[5053]: Accepted publickey for core from 10.0.0.1 port 48760 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:15:39.418783 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:39.423461 systemd-logind[1586]: New session 14 of user core. Dec 16 03:15:39.416000 audit[5053]: CRED_ACQ pid=5053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.452651 kernel: audit: type=1101 audit(1765854939.415:760): pid=5053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.452747 kernel: audit: type=1103 audit(1765854939.416:761): pid=5053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.452781 kernel: audit: type=1006 audit(1765854939.416:762): pid=5053 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 16 03:15:39.416000 audit[5053]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd14858bb0 a2=3 a3=0 items=0 ppid=1 pid=5053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.457118 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 16 03:15:39.462743 kernel: audit: type=1300 audit(1765854939.416:762): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd14858bb0 a2=3 a3=0 items=0 ppid=1 pid=5053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.462799 kernel: audit: type=1327 audit(1765854939.416:762): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:39.416000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:39.460000 audit[5053]: USER_START pid=5053 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.472454 kernel: audit: type=1105 audit(1765854939.460:763): pid=5053 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.472534 kernel: audit: type=1103 audit(1765854939.463:764): pid=5057 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.463000 audit[5057]: CRED_ACQ pid=5057 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.588227 sshd[5057]: Connection closed by 10.0.0.1 port 48760 Dec 16 03:15:39.588565 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:39.589000 audit[5053]: USER_END pid=5053 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.590000 audit[5053]: CRED_DISP pid=5053 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.603221 kernel: audit: type=1106 audit(1765854939.589:765): pid=5053 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.603288 kernel: audit: type=1104 audit(1765854939.590:766): pid=5053 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.608922 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:48760.service: Deactivated successfully. Dec 16 03:15:39.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.26:22-10.0.0.1:48760 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:39.611075 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 03:15:39.612344 systemd-logind[1586]: Session 14 logged out. Waiting for processes to exit. Dec 16 03:15:39.616753 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:48768.service - OpenSSH per-connection server daemon (10.0.0.1:48768). Dec 16 03:15:39.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.26:22-10.0.0.1:48768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:39.618138 systemd-logind[1586]: Removed session 14. Dec 16 03:15:39.690000 audit[5072]: USER_ACCT pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.690995 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 48768 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:15:39.691000 audit[5072]: CRED_ACQ pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.691000 audit[5072]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdbedd27b0 a2=3 a3=0 items=0 ppid=1 pid=5072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.691000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:39.693478 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:39.698472 systemd-logind[1586]: New session 
15 of user core. Dec 16 03:15:39.711913 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 03:15:39.713000 audit[5072]: USER_START pid=5072 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.715000 audit[5076]: CRED_ACQ pid=5076 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.830345 sshd[5076]: Connection closed by 10.0.0.1 port 48768 Dec 16 03:15:39.830827 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:39.835000 audit[5072]: USER_END pid=5072 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.835000 audit[5072]: CRED_DISP pid=5072 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.846190 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:48768.service: Deactivated successfully. Dec 16 03:15:39.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.26:22-10.0.0.1:48768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:39.851348 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 16 03:15:39.853828 systemd-logind[1586]: Session 15 logged out. Waiting for processes to exit. Dec 16 03:15:39.857621 systemd-logind[1586]: Removed session 15. Dec 16 03:15:39.859944 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:48774.service - OpenSSH per-connection server daemon (10.0.0.1:48774). Dec 16 03:15:39.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.26:22-10.0.0.1:48774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:39.924000 audit[5087]: USER_ACCT pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.925566 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 48774 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:15:39.926000 audit[5087]: CRED_ACQ pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.926000 audit[5087]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc96e0e4a0 a2=3 a3=0 items=0 ppid=1 pid=5087 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:39.926000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:39.928331 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:39.933973 systemd-logind[1586]: New session 16 of user core. 
Dec 16 03:15:39.943959 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 16 03:15:39.946000 audit[5087]: USER_START pid=5087 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:39.948000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:40.025161 sshd[5091]: Connection closed by 10.0.0.1 port 48774 Dec 16 03:15:40.025501 sshd-session[5087]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:40.026000 audit[5087]: USER_END pid=5087 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:40.026000 audit[5087]: CRED_DISP pid=5087 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:40.030686 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:48774.service: Deactivated successfully. Dec 16 03:15:40.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.26:22-10.0.0.1:48774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:40.032963 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 16 03:15:40.034324 systemd-logind[1586]: Session 16 logged out. Waiting for processes to exit. Dec 16 03:15:40.036176 systemd-logind[1586]: Removed session 16. Dec 16 03:15:40.504553 containerd[1606]: time="2025-12-16T03:15:40.504502156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:15:40.821899 containerd[1606]: time="2025-12-16T03:15:40.821560387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:40.823579 containerd[1606]: time="2025-12-16T03:15:40.823517202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:15:40.823579 containerd[1606]: time="2025-12-16T03:15:40.823601391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:40.823874 kubelet[2848]: E1216 03:15:40.823709 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:15:40.823874 kubelet[2848]: E1216 03:15:40.823769 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:15:40.824374 kubelet[2848]: E1216 03:15:40.823913 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zplnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vwjmk_calico-system(5b1e8305-d364-4bc6-9a3a-e97daf2d06ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:40.825543 kubelet[2848]: E1216 03:15:40.825502 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwjmk" podUID="5b1e8305-d364-4bc6-9a3a-e97daf2d06ed" Dec 16 03:15:41.502968 containerd[1606]: time="2025-12-16T03:15:41.502921797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:15:41.835555 containerd[1606]: time="2025-12-16T03:15:41.835407976Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:41.837420 containerd[1606]: time="2025-12-16T03:15:41.837347278Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:15:41.837420 containerd[1606]: time="2025-12-16T03:15:41.837402593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:41.837697 kubelet[2848]: E1216 03:15:41.837645 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:15:41.838021 kubelet[2848]: E1216 03:15:41.837707 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:15:41.838021 kubelet[2848]: E1216 03:15:41.837868 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvmgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h4rmp_calico-system(05279c13-1f07-47f3-aaa0-f3eff20006ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:15:41.839861 containerd[1606]: time="2025-12-16T03:15:41.839826793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:15:42.180414 containerd[1606]: time="2025-12-16T03:15:42.180327113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:42.182112 containerd[1606]: time="2025-12-16T03:15:42.182013646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:15:42.182364 containerd[1606]: time="2025-12-16T03:15:42.182122562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:42.182428 kubelet[2848]: E1216 03:15:42.182322 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:15:42.182428 kubelet[2848]: E1216 03:15:42.182399 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:15:42.182606 kubelet[2848]: E1216 03:15:42.182551 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvmgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h4rmp_calico-system(05279c13-1f07-47f3-aaa0-f3eff20006ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:42.183849 kubelet[2848]: E1216 03:15:42.183772 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:15:42.503394 containerd[1606]: time="2025-12-16T03:15:42.503252862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:15:42.858262 containerd[1606]: time="2025-12-16T03:15:42.858217839Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:42.946464 containerd[1606]: time="2025-12-16T03:15:42.946387642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:15:42.946620 containerd[1606]: time="2025-12-16T03:15:42.946498963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:42.946776 kubelet[2848]: E1216 03:15:42.946699 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:15:42.947179 kubelet[2848]: E1216 03:15:42.946793 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:15:42.947179 kubelet[2848]: E1216 03:15:42.947089 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c5e6ea94913b425cb341c293718167ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wv6n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-66c75bc767-db58s_calico-system(a5002d96-1b90-443a-9926-1ad68bf4babc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:42.947310 containerd[1606]: time="2025-12-16T03:15:42.947182517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:15:43.483136 containerd[1606]: time="2025-12-16T03:15:43.483066069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:43.572811 containerd[1606]: time="2025-12-16T03:15:43.572742214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:43.572987 containerd[1606]: time="2025-12-16T03:15:43.572794693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:15:43.573179 kubelet[2848]: E1216 03:15:43.573117 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:15:43.573234 kubelet[2848]: E1216 03:15:43.573184 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:15:43.573537 kubelet[2848]: E1216 03:15:43.573436 
2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtrdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79777bd46b-pgqh8_calico-system(629371f1-f66b-44ce-8151-2d326255465b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:43.573701 containerd[1606]: time="2025-12-16T03:15:43.573552518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:15:43.574800 kubelet[2848]: E1216 03:15:43.574747 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8" podUID="629371f1-f66b-44ce-8151-2d326255465b" Dec 16 03:15:43.933580 containerd[1606]: time="2025-12-16T03:15:43.933508854Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
03:15:44.009989 containerd[1606]: time="2025-12-16T03:15:44.009872851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:15:44.009989 containerd[1606]: time="2025-12-16T03:15:44.009956980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:44.010310 kubelet[2848]: E1216 03:15:44.010257 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:44.010701 kubelet[2848]: E1216 03:15:44.010319 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:44.010701 kubelet[2848]: E1216 03:15:44.010612 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtxvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7bb69b54-c7929_calico-apiserver(9e2da91d-bd6f-474c-851c-2fd9d9db86f3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:44.010901 containerd[1606]: time="2025-12-16T03:15:44.010858757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:15:44.012135 kubelet[2848]: E1216 03:15:44.012096 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929" podUID="9e2da91d-bd6f-474c-851c-2fd9d9db86f3" Dec 16 03:15:44.507340 containerd[1606]: time="2025-12-16T03:15:44.506988763Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:44.569290 containerd[1606]: time="2025-12-16T03:15:44.569095850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:15:44.569290 containerd[1606]: time="2025-12-16T03:15:44.569240513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:44.569784 kubelet[2848]: E1216 03:15:44.569728 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:15:44.570261 kubelet[2848]: E1216 03:15:44.569892 2848 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:15:44.570261 kubelet[2848]: E1216 03:15:44.570176 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wv6n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,Localho
stProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66c75bc767-db58s_calico-system(a5002d96-1b90-443a-9926-1ad68bf4babc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:44.573517 containerd[1606]: time="2025-12-16T03:15:44.570611939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:15:44.573702 kubelet[2848]: E1216 03:15:44.573431 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c75bc767-db58s" podUID="a5002d96-1b90-443a-9926-1ad68bf4babc" Dec 16 03:15:45.035883 containerd[1606]: time="2025-12-16T03:15:45.035778914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:15:45.055592 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:45774.service - OpenSSH per-connection server daemon (10.0.0.1:45774). 
Dec 16 03:15:45.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.26:22-10.0.0.1:45774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:45.064061 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 16 03:15:45.064206 kernel: audit: type=1130 audit(1765854945.054:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.26:22-10.0.0.1:45774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:45.180916 containerd[1606]: time="2025-12-16T03:15:45.180839750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:15:45.181332 containerd[1606]: time="2025-12-16T03:15:45.181195263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:15:45.181679 kubelet[2848]: E1216 03:15:45.181632 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:45.182213 kubelet[2848]: E1216 03:15:45.182187 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:15:45.182538 kubelet[2848]: E1216 03:15:45.182483 2848 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6cjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7bb69b54-5cccw_calico-apiserver(f6a2c05c-26b5-45cc-94cd-f96ac9ec6971): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:15:45.186199 kubelet[2848]: E1216 03:15:45.186143 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-5cccw" podUID="f6a2c05c-26b5-45cc-94cd-f96ac9ec6971" Dec 16 03:15:45.197000 audit[5106]: USER_ACCT pid=5106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 
03:15:45.200968 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:45.201930 sshd[5106]: Accepted publickey for core from 10.0.0.1 port 45774 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:15:45.198000 audit[5106]: CRED_ACQ pid=5106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.216105 kernel: audit: type=1101 audit(1765854945.197:787): pid=5106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.216189 kernel: audit: type=1103 audit(1765854945.198:788): pid=5106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.220670 systemd-logind[1586]: New session 17 of user core. 
Dec 16 03:15:45.226098 kernel: audit: type=1006 audit(1765854945.198:789): pid=5106 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 16 03:15:45.226217 kernel: audit: type=1300 audit(1765854945.198:789): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4d642950 a2=3 a3=0 items=0 ppid=1 pid=5106 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:45.198000 audit[5106]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4d642950 a2=3 a3=0 items=0 ppid=1 pid=5106 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:45.231286 kernel: audit: type=1327 audit(1765854945.198:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:45.198000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:45.237101 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 16 03:15:45.242000 audit[5106]: USER_START pid=5106 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.254874 kernel: audit: type=1105 audit(1765854945.242:790): pid=5106 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.255010 kernel: audit: type=1103 audit(1765854945.242:791): pid=5110 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.242000 audit[5110]: CRED_ACQ pid=5110 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.496220 sshd[5110]: Connection closed by 10.0.0.1 port 45774 Dec 16 03:15:45.496458 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:45.497000 audit[5106]: USER_END pid=5106 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.497000 audit[5106]: CRED_DISP pid=5106 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.516213 kernel: audit: type=1106 audit(1765854945.497:792): pid=5106 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.516385 kernel: audit: type=1104 audit(1765854945.497:793): pid=5106 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:45.518529 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:45774.service: Deactivated successfully. Dec 16 03:15:45.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.26:22-10.0.0.1:45774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:45.529432 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 03:15:45.542692 systemd-logind[1586]: Session 17 logged out. Waiting for processes to exit. Dec 16 03:15:45.550924 systemd-logind[1586]: Removed session 17. Dec 16 03:15:47.502793 kubelet[2848]: E1216 03:15:47.502707 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:50.508368 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:54718.service - OpenSSH per-connection server daemon (10.0.0.1:54718). 
Dec 16 03:15:50.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.26:22-10.0.0.1:54718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:50.526049 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:15:50.526162 kernel: audit: type=1130 audit(1765854950.507:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.26:22-10.0.0.1:54718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:50.587000 audit[5135]: USER_ACCT pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.588577 sshd[5135]: Accepted publickey for core from 10.0.0.1 port 54718 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:15:50.591021 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:50.595423 systemd-logind[1586]: New session 18 of user core. 
Dec 16 03:15:50.588000 audit[5135]: CRED_ACQ pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.610620 kernel: audit: type=1101 audit(1765854950.587:796): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.610831 kernel: audit: type=1103 audit(1765854950.588:797): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.610862 kernel: audit: type=1006 audit(1765854950.589:798): pid=5135 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 16 03:15:50.589000 audit[5135]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc026dbb0 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:50.620823 kernel: audit: type=1300 audit(1765854950.589:798): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdc026dbb0 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:50.589000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:50.622231 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 16 03:15:50.623338 kernel: audit: type=1327 audit(1765854950.589:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:50.624000 audit[5135]: USER_START pid=5135 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.632749 kernel: audit: type=1105 audit(1765854950.624:799): pid=5135 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.627000 audit[5139]: CRED_ACQ pid=5139 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.638757 kernel: audit: type=1103 audit(1765854950.627:800): pid=5139 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.703908 sshd[5139]: Connection closed by 10.0.0.1 port 54718 Dec 16 03:15:50.704284 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:50.705000 audit[5135]: USER_END pid=5135 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Dec 16 03:15:50.712689 systemd-logind[1586]: Session 18 logged out. Waiting for processes to exit. Dec 16 03:15:50.712980 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:54718.service: Deactivated successfully. Dec 16 03:15:50.705000 audit[5135]: CRED_DISP pid=5135 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.716110 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 03:15:50.718861 systemd-logind[1586]: Removed session 18. Dec 16 03:15:50.719755 kernel: audit: type=1106 audit(1765854950.705:801): pid=5135 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.719822 kernel: audit: type=1104 audit(1765854950.705:802): pid=5135 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:50.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.26:22-10.0.0.1:54718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:52.502061 kubelet[2848]: E1216 03:15:52.501989 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwjmk" podUID="5b1e8305-d364-4bc6-9a3a-e97daf2d06ed" Dec 16 03:15:53.504108 kubelet[2848]: E1216 03:15:53.504022 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:15:54.502564 kubelet[2848]: E1216 03:15:54.502482 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8" podUID="629371f1-f66b-44ce-8151-2d326255465b" Dec 16 03:15:55.729163 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:54724.service - OpenSSH per-connection server daemon (10.0.0.1:54724). Dec 16 03:15:55.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.26:22-10.0.0.1:54724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:55.730556 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:15:55.730636 kernel: audit: type=1130 audit(1765854955.727:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.26:22-10.0.0.1:54724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:15:55.791000 audit[5153]: USER_ACCT pid=5153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:55.793400 sshd[5153]: Accepted publickey for core from 10.0.0.1 port 54724 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:15:55.796292 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:15:55.801590 systemd-logind[1586]: New session 19 of user core. 
Dec 16 03:15:55.793000 audit[5153]: CRED_ACQ pid=5153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:55.851954 kernel: audit: type=1101 audit(1765854955.791:805): pid=5153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:55.852010 kernel: audit: type=1103 audit(1765854955.793:806): pid=5153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:55.852060 kernel: audit: type=1006 audit(1765854955.793:807): pid=5153 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 16 03:15:55.793000 audit[5153]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2ba22a90 a2=3 a3=0 items=0 ppid=1 pid=5153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:55.860153 kernel: audit: type=1300 audit(1765854955.793:807): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc2ba22a90 a2=3 a3=0 items=0 ppid=1 pid=5153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:15:55.860199 kernel: audit: type=1327 audit(1765854955.793:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:55.793000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:15:55.864128 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 03:15:55.865000 audit[5153]: USER_START pid=5153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:55.865000 audit[5176]: CRED_ACQ pid=5176 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:55.877466 kernel: audit: type=1105 audit(1765854955.865:808): pid=5153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:55.877543 kernel: audit: type=1103 audit(1765854955.865:809): pid=5176 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:55.943489 sshd[5176]: Connection closed by 10.0.0.1 port 54724 Dec 16 03:15:55.943816 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Dec 16 03:15:55.943000 audit[5153]: USER_END pid=5153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Dec 16 03:15:55.949161 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:54724.service: Deactivated successfully. Dec 16 03:15:55.951291 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 03:15:55.952139 systemd-logind[1586]: Session 19 logged out. Waiting for processes to exit. Dec 16 03:15:55.954335 systemd-logind[1586]: Removed session 19. Dec 16 03:15:55.943000 audit[5153]: CRED_DISP pid=5153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:56.003669 kernel: audit: type=1106 audit(1765854955.943:810): pid=5153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:56.003971 kernel: audit: type=1104 audit(1765854955.943:811): pid=5153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:15:55.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.26:22-10.0.0.1:54724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:15:56.068100 kubelet[2848]: E1216 03:15:56.068056 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:15:56.504600 kubelet[2848]: E1216 03:15:56.504499 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c75bc767-db58s" podUID="a5002d96-1b90-443a-9926-1ad68bf4babc" Dec 16 03:15:57.502822 kubelet[2848]: E1216 03:15:57.502764 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929" podUID="9e2da91d-bd6f-474c-851c-2fd9d9db86f3" Dec 16 03:16:00.503089 kubelet[2848]: E1216 03:16:00.502732 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-5cccw" podUID="f6a2c05c-26b5-45cc-94cd-f96ac9ec6971" Dec 16 03:16:00.959981 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:49200.service - OpenSSH per-connection server daemon (10.0.0.1:49200). Dec 16 03:16:00.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.26:22-10.0.0.1:49200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:00.961662 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:16:00.961770 kernel: audit: type=1130 audit(1765854960.958:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.26:22-10.0.0.1:49200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:01.040000 audit[5199]: USER_ACCT pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.042736 sshd[5199]: Accepted publickey for core from 10.0.0.1 port 49200 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:16:01.046093 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:01.042000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.053222 kernel: audit: type=1101 audit(1765854961.040:814): pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.053268 kernel: audit: type=1103 audit(1765854961.042:815): pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.053304 kernel: audit: type=1006 audit(1765854961.042:816): pid=5199 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 16 03:16:01.052790 systemd-logind[1586]: New session 20 of user core. 
Dec 16 03:16:01.042000 audit[5199]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8031c730 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:01.060360 kernel: audit: type=1300 audit(1765854961.042:816): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8031c730 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:01.060424 kernel: audit: type=1327 audit(1765854961.042:816): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:01.042000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:01.063933 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 16 03:16:01.064000 audit[5199]: USER_START pid=5199 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.066000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.080295 kernel: audit: type=1105 audit(1765854961.064:817): pid=5199 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.080348 kernel: audit: type=1103 audit(1765854961.066:818): pid=5203 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.201938 sshd[5203]: Connection closed by 10.0.0.1 port 49200 Dec 16 03:16:01.202199 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:01.201000 audit[5199]: USER_END pid=5199 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.209764 kernel: audit: type=1106 audit(1765854961.201:819): pid=5199 uid=0 auid=500 ses=20 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.202000 audit[5199]: CRED_DISP pid=5199 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.214746 kernel: audit: type=1104 audit(1765854961.202:820): pid=5199 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.219954 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:49200.service: Deactivated successfully. Dec 16 03:16:01.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.26:22-10.0.0.1:49200 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.222777 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 03:16:01.224304 systemd-logind[1586]: Session 20 logged out. Waiting for processes to exit. Dec 16 03:16:01.229139 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:49204.service - OpenSSH per-connection server daemon (10.0.0.1:49204). Dec 16 03:16:01.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.26:22-10.0.0.1:49204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.231602 systemd-logind[1586]: Removed session 20. 
Dec 16 03:16:01.303000 audit[5217]: USER_ACCT pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.305167 sshd[5217]: Accepted publickey for core from 10.0.0.1 port 49204 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:16:01.304000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.304000 audit[5217]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdab33ae50 a2=3 a3=0 items=0 ppid=1 pid=5217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:01.304000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:01.307478 sshd-session[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:01.312014 systemd-logind[1586]: New session 21 of user core. Dec 16 03:16:01.319920 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 16 03:16:01.321000 audit[5217]: USER_START pid=5217 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.324000 audit[5221]: CRED_ACQ pid=5221 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.785761 sshd[5221]: Connection closed by 10.0.0.1 port 49204 Dec 16 03:16:01.786350 sshd-session[5217]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:01.787000 audit[5217]: USER_END pid=5217 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.787000 audit[5217]: CRED_DISP pid=5217 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.794617 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:49204.service: Deactivated successfully. Dec 16 03:16:01.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.26:22-10.0.0.1:49204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.797022 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 03:16:01.798303 systemd-logind[1586]: Session 21 logged out. Waiting for processes to exit. 
Dec 16 03:16:01.801067 systemd-logind[1586]: Removed session 21. Dec 16 03:16:01.802623 systemd[1]: Started sshd@20-10.0.0.26:22-10.0.0.1:49218.service - OpenSSH per-connection server daemon (10.0.0.1:49218). Dec 16 03:16:01.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.26:22-10.0.0.1:49218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:01.859000 audit[5233]: USER_ACCT pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.861228 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 49218 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:16:01.860000 audit[5233]: CRED_ACQ pid=5233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.860000 audit[5233]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6fd9d370 a2=3 a3=0 items=0 ppid=1 pid=5233 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:01.860000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:01.864855 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:01.871024 systemd-logind[1586]: New session 22 of user core. Dec 16 03:16:01.880900 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 03:16:01.882000 audit[5233]: USER_START pid=5233 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:01.885000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:03.149000 audit[5268]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:03.149000 audit[5268]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffd91bf870 a2=0 a3=7fffd91bf85c items=0 ppid=2997 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:03.157000 audit[5268]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:03.157000 audit[5268]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd91bf870 a2=0 a3=0 items=0 ppid=2997 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.157000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:03.178000 audit[5270]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=5270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:03.178000 audit[5270]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdf70a5450 a2=0 a3=7ffdf70a543c items=0 ppid=2997 pid=5270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:03.183000 audit[5270]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5270 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:03.183000 audit[5270]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdf70a5450 a2=0 a3=0 items=0 ppid=2997 pid=5270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:03.265512 sshd[5237]: Connection closed by 10.0.0.1 port 49218 Dec 16 03:16:03.266027 sshd-session[5233]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:03.269000 audit[5233]: USER_END pid=5233 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Dec 16 03:16:03.270000 audit[5233]: CRED_DISP pid=5233 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:03.279956 systemd[1]: sshd@20-10.0.0.26:22-10.0.0.1:49218.service: Deactivated successfully. Dec 16 03:16:03.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.26:22-10.0.0.1:49218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:03.283445 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 03:16:03.285759 systemd-logind[1586]: Session 22 logged out. Waiting for processes to exit. Dec 16 03:16:03.292962 systemd[1]: Started sshd@21-10.0.0.26:22-10.0.0.1:49222.service - OpenSSH per-connection server daemon (10.0.0.1:49222). Dec 16 03:16:03.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.26:22-10.0.0.1:49222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:03.296841 systemd-logind[1586]: Removed session 22. 
Dec 16 03:16:03.387000 audit[5275]: USER_ACCT pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:03.389705 sshd[5275]: Accepted publickey for core from 10.0.0.1 port 49222 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:16:03.389000 audit[5275]: CRED_ACQ pid=5275 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:03.389000 audit[5275]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd46ca17c0 a2=3 a3=0 items=0 ppid=1 pid=5275 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:03.389000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:03.392236 sshd-session[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:03.400384 systemd-logind[1586]: New session 23 of user core. Dec 16 03:16:03.410100 systemd[1]: Started session-23.scope - Session 23 of User core. 
Dec 16 03:16:03.411000 audit[5275]: USER_START pid=5275 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:03.415000 audit[5279]: CRED_ACQ pid=5279 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:03.745835 sshd[5279]: Connection closed by 10.0.0.1 port 49222 Dec 16 03:16:03.745927 sshd-session[5275]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:03.747000 audit[5275]: USER_END pid=5275 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:03.747000 audit[5275]: CRED_DISP pid=5275 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:03.756616 systemd[1]: sshd@21-10.0.0.26:22-10.0.0.1:49222.service: Deactivated successfully. Dec 16 03:16:03.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.26:22-10.0.0.1:49222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:03.760276 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 03:16:03.761688 systemd-logind[1586]: Session 23 logged out. Waiting for processes to exit. 
Dec 16 03:16:03.768149 systemd[1]: Started sshd@22-10.0.0.26:22-10.0.0.1:49238.service - OpenSSH per-connection server daemon (10.0.0.1:49238).
Dec 16 03:16:03.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.26:22-10.0.0.1:49238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:03.769434 systemd-logind[1586]: Removed session 23.
Dec 16 03:16:03.829000 audit[5291]: USER_ACCT pid=5291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:03.831154 sshd[5291]: Accepted publickey for core from 10.0.0.1 port 49238 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY
Dec 16 03:16:03.830000 audit[5291]: CRED_ACQ pid=5291 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:03.831000 audit[5291]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeca7d1bc0 a2=3 a3=0 items=0 ppid=1 pid=5291 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:03.831000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:16:03.833961 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:16:03.839313 systemd-logind[1586]: New session 24 of user core.
Dec 16 03:16:03.855990 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 03:16:03.857000 audit[5291]: USER_START pid=5291 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:03.859000 audit[5295]: CRED_ACQ pid=5295 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:03.964196 sshd[5295]: Connection closed by 10.0.0.1 port 49238
Dec 16 03:16:03.964510 sshd-session[5291]: pam_unix(sshd:session): session closed for user core
Dec 16 03:16:03.964000 audit[5291]: USER_END pid=5291 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:03.964000 audit[5291]: CRED_DISP pid=5291 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:03.971170 systemd[1]: sshd@22-10.0.0.26:22-10.0.0.1:49238.service: Deactivated successfully.
Dec 16 03:16:03.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.26:22-10.0.0.1:49238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:03.974023 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 03:16:03.975362 systemd-logind[1586]: Session 24 logged out. Waiting for processes to exit.
Dec 16 03:16:03.977481 systemd-logind[1586]: Removed session 24. Dec 16 03:16:04.508246 containerd[1606]: time="2025-12-16T03:16:04.508195686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 03:16:04.832766 containerd[1606]: time="2025-12-16T03:16:04.832580450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:04.833849 containerd[1606]: time="2025-12-16T03:16:04.833777660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 03:16:04.833929 containerd[1606]: time="2025-12-16T03:16:04.833858282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:04.834128 kubelet[2848]: E1216 03:16:04.834074 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:16:04.834560 kubelet[2848]: E1216 03:16:04.834134 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 03:16:04.834560 kubelet[2848]: E1216 03:16:04.834327 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zplnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vwjmk_calico-system(5b1e8305-d364-4bc6-9a3a-e97daf2d06ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:04.835700 kubelet[2848]: E1216 03:16:04.835668 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwjmk" podUID="5b1e8305-d364-4bc6-9a3a-e97daf2d06ed" Dec 16 03:16:05.503213 containerd[1606]: time="2025-12-16T03:16:05.503134747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 03:16:05.832643 containerd[1606]: time="2025-12-16T03:16:05.832433643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:05.881097 containerd[1606]: 
time="2025-12-16T03:16:05.880971298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 03:16:05.881097 containerd[1606]: time="2025-12-16T03:16:05.881036141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:05.881434 kubelet[2848]: E1216 03:16:05.881332 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:16:05.881434 kubelet[2848]: E1216 03:16:05.881416 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 03:16:05.882163 kubelet[2848]: E1216 03:16:05.881593 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jtrdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-79777bd46b-pgqh8_calico-system(629371f1-f66b-44ce-8151-2d326255465b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:05.883032 kubelet[2848]: E1216 03:16:05.882917 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8" podUID="629371f1-f66b-44ce-8151-2d326255465b" Dec 16 03:16:07.502998 kubelet[2848]: E1216 03:16:07.502851 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:16:08.502832 containerd[1606]: 
time="2025-12-16T03:16:08.502783868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 03:16:08.811000 audit[5317]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:08.814501 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 16 03:16:08.814555 kernel: audit: type=1325 audit(1765854968.811:862): table=filter:147 family=2 entries=26 op=nft_register_rule pid=5317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:08.811000 audit[5317]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffdd8f4150 a2=0 a3=7fffdd8f413c items=0 ppid=2997 pid=5317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:08.824532 kernel: audit: type=1300 audit(1765854968.811:862): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffdd8f4150 a2=0 a3=7fffdd8f413c items=0 ppid=2997 pid=5317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:08.824574 kernel: audit: type=1327 audit(1765854968.811:862): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:08.811000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:08.832447 containerd[1606]: time="2025-12-16T03:16:08.832402064Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:08.833798 containerd[1606]: time="2025-12-16T03:16:08.833746360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 03:16:08.833798 containerd[1606]: time="2025-12-16T03:16:08.833830770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:08.834069 kubelet[2848]: E1216 03:16:08.833958 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:16:08.834069 kubelet[2848]: E1216 03:16:08.834011 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 03:16:08.834540 kubelet[2848]: E1216 03:16:08.834145 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvmgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h4rmp_calico-system(05279c13-1f07-47f3-aaa0-f3eff20006ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Dec 16 03:16:08.836433 containerd[1606]: time="2025-12-16T03:16:08.836370593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 03:16:08.836000 audit[5317]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=5317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:08.836000 audit[5317]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffdd8f4150 a2=0 a3=7fffdd8f413c items=0 ppid=2997 pid=5317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:08.852427 kernel: audit: type=1325 audit(1765854968.836:863): table=nat:148 family=2 entries=104 op=nft_register_chain pid=5317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 16 03:16:08.852504 kernel: audit: type=1300 audit(1765854968.836:863): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffdd8f4150 a2=0 a3=7fffdd8f413c items=0 ppid=2997 pid=5317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:08.852554 kernel: audit: type=1327 audit(1765854968.836:863): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:08.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 16 03:16:08.982418 systemd[1]: Started sshd@23-10.0.0.26:22-10.0.0.1:49242.service - OpenSSH per-connection server daemon (10.0.0.1:49242). 
Dec 16 03:16:08.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.26:22-10.0.0.1:49242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:08.988764 kernel: audit: type=1130 audit(1765854968.980:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.26:22-10.0.0.1:49242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:09.038000 audit[5319]: USER_ACCT pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:09.040637 sshd[5319]: Accepted publickey for core from 10.0.0.1 port 49242 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY
Dec 16 03:16:09.042864 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:16:09.039000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:09.048103 systemd-logind[1586]: New session 25 of user core.
Dec 16 03:16:09.054161 kernel: audit: type=1101 audit(1765854969.038:865): pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:09.054219 kernel: audit: type=1103 audit(1765854969.039:866): pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:09.054245 kernel: audit: type=1006 audit(1765854969.039:867): pid=5319 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Dec 16 03:16:09.039000 audit[5319]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2ea56160 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:09.039000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:16:09.062896 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 03:16:09.063000 audit[5319]: USER_START pid=5319 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:09.066000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:09.144201 sshd[5323]: Connection closed by 10.0.0.1 port 49242
Dec 16 03:16:09.144496 sshd-session[5319]: pam_unix(sshd:session): session closed for user core
Dec 16 03:16:09.144000 audit[5319]: USER_END pid=5319 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:09.144000 audit[5319]: CRED_DISP pid=5319 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:09.150014 systemd[1]: sshd@23-10.0.0.26:22-10.0.0.1:49242.service: Deactivated successfully.
Dec 16 03:16:09.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.26:22-10.0.0.1:49242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:09.152151 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 03:16:09.153172 systemd-logind[1586]: Session 25 logged out. Waiting for processes to exit.
Dec 16 03:16:09.154835 systemd-logind[1586]: Removed session 25. Dec 16 03:16:09.169091 containerd[1606]: time="2025-12-16T03:16:09.169016989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:09.170582 containerd[1606]: time="2025-12-16T03:16:09.170447349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 03:16:09.170672 containerd[1606]: time="2025-12-16T03:16:09.170550573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:09.170872 kubelet[2848]: E1216 03:16:09.170828 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:16:09.170924 kubelet[2848]: E1216 03:16:09.170884 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 03:16:09.171042 kubelet[2848]: E1216 03:16:09.171011 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvmgs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h4rmp_calico-system(05279c13-1f07-47f3-aaa0-f3eff20006ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:09.172232 kubelet[2848]: E1216 03:16:09.172202 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee" Dec 16 03:16:09.502582 kubelet[2848]: E1216 03:16:09.502292 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:16:09.503590 containerd[1606]: time="2025-12-16T03:16:09.503553770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 03:16:09.899889 containerd[1606]: time="2025-12-16T03:16:09.899624328Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:09.900937 containerd[1606]: time="2025-12-16T03:16:09.900885497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 03:16:09.901015 containerd[1606]: time="2025-12-16T03:16:09.900965568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:09.901187 kubelet[2848]: E1216 03:16:09.901128 2848 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:16:09.901595 kubelet[2848]: E1216 03:16:09.901206 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 03:16:09.901595 kubelet[2848]: E1216 03:16:09.901397 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c5e6ea94913b425cb341c293718167ca,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wv6n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppA
rmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66c75bc767-db58s_calico-system(a5002d96-1b90-443a-9926-1ad68bf4babc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:09.903390 containerd[1606]: time="2025-12-16T03:16:09.903362291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 03:16:10.236055 containerd[1606]: time="2025-12-16T03:16:10.235852587Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:10.237457 containerd[1606]: time="2025-12-16T03:16:10.237367024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 03:16:10.237457 containerd[1606]: time="2025-12-16T03:16:10.237428209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:10.237778 kubelet[2848]: E1216 03:16:10.237687 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:16:10.237877 kubelet[2848]: E1216 03:16:10.237785 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 03:16:10.237971 kubelet[2848]: E1216 03:16:10.237922 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wv6n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66c75bc767-db58s_calico-system(a5002d96-1b90-443a-9926-1ad68bf4babc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:10.239179 kubelet[2848]: E1216 03:16:10.239131 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c75bc767-db58s" podUID="a5002d96-1b90-443a-9926-1ad68bf4babc" Dec 16 03:16:12.503564 containerd[1606]: time="2025-12-16T03:16:12.503478730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:16:12.955456 containerd[1606]: time="2025-12-16T03:16:12.955395320Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:12.957000 containerd[1606]: time="2025-12-16T03:16:12.956929314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:16:12.957095 containerd[1606]: time="2025-12-16T03:16:12.956964801Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:12.957320 kubelet[2848]: E1216 03:16:12.957259 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:16:12.957808 kubelet[2848]: E1216 03:16:12.957323 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:16:12.957808 kubelet[2848]: E1216 03:16:12.957455 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gtxvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7bb69b54-c7929_calico-apiserver(9e2da91d-bd6f-474c-851c-2fd9d9db86f3): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:12.958617 kubelet[2848]: E1216 03:16:12.958590 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-c7929" podUID="9e2da91d-bd6f-474c-851c-2fd9d9db86f3" Dec 16 03:16:13.502060 kubelet[2848]: E1216 03:16:13.502007 2848 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 16 03:16:14.163129 systemd[1]: Started sshd@24-10.0.0.26:22-10.0.0.1:46264.service - OpenSSH per-connection server daemon (10.0.0.1:46264). Dec 16 03:16:14.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.26:22-10.0.0.1:46264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:14.179929 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 16 03:16:14.180013 kernel: audit: type=1130 audit(1765854974.161:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.26:22-10.0.0.1:46264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:14.249000 audit[5338]: USER_ACCT pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.251967 sshd[5338]: Accepted publickey for core from 10.0.0.1 port 46264 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:16:14.254451 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:14.251000 audit[5338]: CRED_ACQ pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.260137 systemd-logind[1586]: New session 26 of user core. Dec 16 03:16:14.263244 kernel: audit: type=1101 audit(1765854974.249:874): pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.263349 kernel: audit: type=1103 audit(1765854974.251:875): pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.263398 kernel: audit: type=1006 audit(1765854974.251:876): pid=5338 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 16 03:16:14.251000 audit[5338]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffddef7bb0 a2=3 a3=0 items=0 ppid=1 pid=5338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:14.271424 kernel: audit: type=1300 audit(1765854974.251:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffddef7bb0 a2=3 a3=0 items=0 ppid=1 pid=5338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:14.271501 kernel: audit: type=1327 audit(1765854974.251:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:14.251000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:14.275105 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 03:16:14.276000 audit[5338]: USER_START pid=5338 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.288764 kernel: audit: type=1105 audit(1765854974.276:877): pid=5338 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.288895 kernel: audit: type=1103 audit(1765854974.279:878): pid=5342 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.279000 audit[5342]: CRED_ACQ pid=5342 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.397196 sshd[5342]: Connection closed by 10.0.0.1 port 46264 Dec 16 03:16:14.397516 sshd-session[5338]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:14.397000 audit[5338]: USER_END pid=5338 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.403169 systemd[1]: sshd@24-10.0.0.26:22-10.0.0.1:46264.service: Deactivated successfully. Dec 16 03:16:14.405407 systemd[1]: session-26.scope: Deactivated successfully. Dec 16 03:16:14.406554 systemd-logind[1586]: Session 26 logged out. Waiting for processes to exit. Dec 16 03:16:14.397000 audit[5338]: CRED_DISP pid=5338 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.408932 systemd-logind[1586]: Removed session 26. 
Dec 16 03:16:14.413281 kernel: audit: type=1106 audit(1765854974.397:879): pid=5338 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.413335 kernel: audit: type=1104 audit(1765854974.397:880): pid=5338 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:14.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.26:22-10.0.0.1:46264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:14.503793 containerd[1606]: time="2025-12-16T03:16:14.503685148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 03:16:14.849147 containerd[1606]: time="2025-12-16T03:16:14.848979198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 03:16:14.864430 containerd[1606]: time="2025-12-16T03:16:14.864331657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 03:16:14.864613 containerd[1606]: time="2025-12-16T03:16:14.864449229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 16 03:16:14.864764 kubelet[2848]: E1216 03:16:14.864687 2848 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:16:14.865176 kubelet[2848]: E1216 03:16:14.864768 2848 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 03:16:14.865176 kubelet[2848]: E1216 03:16:14.864900 2848 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6cjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d7bb69b54-5cccw_calico-apiserver(f6a2c05c-26b5-45cc-94cd-f96ac9ec6971): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 03:16:14.866097 kubelet[2848]: E1216 03:16:14.866066 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d7bb69b54-5cccw" podUID="f6a2c05c-26b5-45cc-94cd-f96ac9ec6971" Dec 16 03:16:16.503104 kubelet[2848]: E1216 03:16:16.503015 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vwjmk" podUID="5b1e8305-d364-4bc6-9a3a-e97daf2d06ed" Dec 16 03:16:17.503488 kubelet[2848]: E1216 03:16:17.503150 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-79777bd46b-pgqh8" podUID="629371f1-f66b-44ce-8151-2d326255465b" Dec 16 03:16:19.410257 systemd[1]: Started sshd@25-10.0.0.26:22-10.0.0.1:46280.service - OpenSSH per-connection server daemon (10.0.0.1:46280). Dec 16 03:16:19.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.26:22-10.0.0.1:46280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 16 03:16:19.448226 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 16 03:16:19.448367 kernel: audit: type=1130 audit(1765854979.409:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.26:22-10.0.0.1:46280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 16 03:16:19.511000 audit[5355]: USER_ACCT pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:19.512351 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 46280 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY Dec 16 03:16:19.515731 sshd-session[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 03:16:19.511000 audit[5355]: CRED_ACQ pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:19.522467 systemd-logind[1586]: New session 27 of user core. Dec 16 03:16:19.524402 kernel: audit: type=1101 audit(1765854979.511:883): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:19.524486 kernel: audit: type=1103 audit(1765854979.511:884): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:19.524535 kernel: audit: type=1006 audit(1765854979.511:885): pid=5355 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 16 03:16:19.527885 kernel: audit: type=1300 audit(1765854979.511:885): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd33ba44b0 a2=3 a3=0 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:19.511000 audit[5355]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd33ba44b0 a2=3 a3=0 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 16 03:16:19.511000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:19.536212 kernel: audit: type=1327 audit(1765854979.511:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 16 03:16:19.541989 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 03:16:19.544000 audit[5355]: USER_START pid=5355 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:19.546000 audit[5359]: CRED_ACQ pid=5359 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:19.558306 kernel: audit: type=1105 audit(1765854979.544:886): pid=5355 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:19.558483 kernel: audit: type=1103 audit(1765854979.546:887): pid=5359 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:19.643894 sshd[5359]: Connection closed by 10.0.0.1 port 46280 Dec 16 03:16:19.644195 sshd-session[5355]: pam_unix(sshd:session): session closed for user core Dec 16 03:16:19.644000 audit[5355]: USER_END pid=5355 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 16 03:16:19.650070 systemd[1]: sshd@25-10.0.0.26:22-10.0.0.1:46280.service: Deactivated successfully. Dec 16 03:16:19.652465 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 03:16:19.653542 systemd-logind[1586]: Session 27 logged out. Waiting for processes to exit. Dec 16 03:16:19.655069 systemd-logind[1586]: Removed session 27. 
Dec 16 03:16:19.644000 audit[5355]: CRED_DISP pid=5355 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:19.673107 kernel: audit: type=1106 audit(1765854979.644:888): pid=5355 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:19.673262 kernel: audit: type=1104 audit(1765854979.644:889): pid=5355 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:19.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.26:22-10.0.0.1:46280 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:22.502991 kubelet[2848]: E1216 03:16:22.502934 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c75bc767-db58s" podUID="a5002d96-1b90-443a-9926-1ad68bf4babc"
Dec 16 03:16:23.506027 kubelet[2848]: E1216 03:16:23.505968 2848 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h4rmp" podUID="05279c13-1f07-47f3-aaa0-f3eff20006ee"
Dec 16 03:16:24.662035 systemd[1]: Started sshd@26-10.0.0.26:22-10.0.0.1:49300.service - OpenSSH per-connection server daemon (10.0.0.1:49300).
Dec 16 03:16:24.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.26:22-10.0.0.1:49300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:24.664020 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 16 03:16:24.664177 kernel: audit: type=1130 audit(1765854984.661:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.26:22-10.0.0.1:49300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 16 03:16:24.736000 audit[5372]: USER_ACCT pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.739779 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 03:16:24.742418 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 49300 ssh2: RSA SHA256:GhpAgDjPDQKTYeqxTnKpUWsy+dD7djTvvXmspUjCjIY
Dec 16 03:16:24.743749 kernel: audit: type=1101 audit(1765854984.736:892): pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.737000 audit[5372]: CRED_ACQ pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.746393 systemd-logind[1586]: New session 28 of user core.
Dec 16 03:16:24.753807 kernel: audit: type=1103 audit(1765854984.737:893): pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.753907 kernel: audit: type=1006 audit(1765854984.737:894): pid=5372 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Dec 16 03:16:24.753957 kernel: audit: type=1300 audit(1765854984.737:894): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf7ce6950 a2=3 a3=0 items=0 ppid=1 pid=5372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:24.737000 audit[5372]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf7ce6950 a2=3 a3=0 items=0 ppid=1 pid=5372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 16 03:16:24.737000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:16:24.762046 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 16 03:16:24.763576 kernel: audit: type=1327 audit(1765854984.737:894): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Dec 16 03:16:24.767000 audit[5372]: USER_START pid=5372 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.770000 audit[5376]: CRED_ACQ pid=5376 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.783341 kernel: audit: type=1105 audit(1765854984.767:895): pid=5372 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.783440 kernel: audit: type=1103 audit(1765854984.770:896): pid=5376 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.883930 sshd[5376]: Connection closed by 10.0.0.1 port 49300
Dec 16 03:16:24.884237 sshd-session[5372]: pam_unix(sshd:session): session closed for user core
Dec 16 03:16:24.885000 audit[5372]: USER_END pid=5372 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.889619 systemd[1]: sshd@26-10.0.0.26:22-10.0.0.1:49300.service: Deactivated successfully.
Dec 16 03:16:24.892796 systemd[1]: session-28.scope: Deactivated successfully.
Dec 16 03:16:24.885000 audit[5372]: CRED_DISP pid=5372 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.894923 systemd-logind[1586]: Session 28 logged out. Waiting for processes to exit.
Dec 16 03:16:24.896500 systemd-logind[1586]: Removed session 28.
Dec 16 03:16:24.904455 kernel: audit: type=1106 audit(1765854984.885:897): pid=5372 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.904550 kernel: audit: type=1104 audit(1765854984.885:898): pid=5372 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 16 03:16:24.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.26:22-10.0.0.1:49300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'