Apr 20 19:00:15.225910 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 15.2.1_p20260214 p5) 15.2.1 20260214, GNU ld (Gentoo 2.46.0 p1) 2.46.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 14 02:21:25 -00 2026
Apr 20 19:00:15.226034 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 19:00:15.226042 kernel: BIOS-provided physical RAM map:
Apr 20 19:00:15.226049 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Apr 20 19:00:15.226054 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Apr 20 19:00:15.226060 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Apr 20 19:00:15.226066 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Apr 20 19:00:15.226134 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Apr 20 19:00:15.226142 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Apr 20 19:00:15.226147 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Apr 20 19:00:15.226154 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Apr 20 19:00:15.226206 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Apr 20 19:00:15.226214 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Apr 20 19:00:15.226219 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Apr 20 19:00:15.226228 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Apr 20 19:00:15.226233 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Apr 20 19:00:15.226240 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 19:00:15.226245 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 19:00:15.226250 kernel: NX (Execute Disable) protection: active
Apr 20 19:00:15.226255 kernel: APIC: Static calls initialized
Apr 20 19:00:15.226262 kernel: e820: update [mem 0x9a142018-0x9a14bc57] usable ==> usable
Apr 20 19:00:15.226269 kernel: e820: update [mem 0x9a105018-0x9a141e57] usable ==> usable
Apr 20 19:00:15.226274 kernel: extended physical RAM map:
Apr 20 19:00:15.226281 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Apr 20 19:00:15.226286 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Apr 20 19:00:15.226290 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Apr 20 19:00:15.226297 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Apr 20 19:00:15.226302 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a105017] usable
Apr 20 19:00:15.226309 kernel: reserve setup_data: [mem 0x000000009a105018-0x000000009a141e57] usable
Apr 20 19:00:15.226314 kernel: reserve setup_data: [mem 0x000000009a141e58-0x000000009a142017] usable
Apr 20 19:00:15.226321 kernel: reserve setup_data: [mem 0x000000009a142018-0x000000009a14bc57] usable
Apr 20 19:00:15.226327 kernel: reserve setup_data: [mem 0x000000009a14bc58-0x000000009b8ecfff] usable
Apr 20 19:00:15.226336 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Apr 20 19:00:15.226342 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Apr 20 19:00:15.226347 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Apr 20 19:00:15.226352 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Apr 20 19:00:15.226357 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Apr 20 19:00:15.226362 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Apr 20 19:00:15.226369 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Apr 20 19:00:15.226374 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Apr 20 19:00:15.226385 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 20 19:00:15.226436 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 20 19:00:15.226442 kernel: efi: EFI v2.7 by EDK II
Apr 20 19:00:15.226448 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1b4018 RNG=0x9bb73018
Apr 20 19:00:15.226456 kernel: random: crng init done
Apr 20 19:00:15.226461 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Apr 20 19:00:15.226467 kernel: secureboot: Secure boot enabled
Apr 20 19:00:15.226474 kernel: SMBIOS 2.8 present.
Apr 20 19:00:15.226480 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Apr 20 19:00:15.226487 kernel: DMI: Memory slots populated: 1/1
Apr 20 19:00:15.226546 kernel: Hypervisor detected: KVM
Apr 20 19:00:15.226553 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x10000000000
Apr 20 19:00:15.226561 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 20 19:00:15.226566 kernel: kvm-clock: using sched offset of 13424810418 cycles
Apr 20 19:00:15.226573 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 20 19:00:15.226582 kernel: tsc: Detected 2793.438 MHz processor
Apr 20 19:00:15.226593 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 20 19:00:15.226652 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 20 19:00:15.226662 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x10000000000
Apr 20 19:00:15.226671 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 20 19:00:15.226682 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 20 19:00:15.226691 kernel: Using GB pages for direct mapping
Apr 20 19:00:15.227440 kernel: ACPI: Early table checksum verification disabled
Apr 20 19:00:15.228394 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Apr 20 19:00:15.228411 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 20 19:00:15.228417 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:00:15.228462 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:00:15.228470 kernel: ACPI: FACS 0x000000009BBDD000 000040
Apr 20 19:00:15.228476 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:00:15.228481 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:00:15.228492 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:00:15.228581 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 20 19:00:15.228596 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 20 19:00:15.228604 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Apr 20 19:00:15.228612 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Apr 20 19:00:15.228624 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Apr 20 19:00:15.228632 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Apr 20 19:00:15.228642 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Apr 20 19:00:15.228651 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Apr 20 19:00:15.228659 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Apr 20 19:00:15.228668 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Apr 20 19:00:15.228677 kernel: No NUMA configuration found
Apr 20 19:00:15.228687 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Apr 20 19:00:15.228696 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Apr 20 19:00:15.228705 kernel: Zone ranges:
Apr 20 19:00:15.228716 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 20 19:00:15.228726 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Apr 20 19:00:15.228735 kernel: Normal empty
Apr 20 19:00:15.228742 kernel: Device empty
Apr 20 19:00:15.228754 kernel: Movable zone start for each node
Apr 20 19:00:15.228762 kernel: Early memory node ranges
Apr 20 19:00:15.228770 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Apr 20 19:00:15.229248 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Apr 20 19:00:15.229284 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Apr 20 19:00:15.229290 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Apr 20 19:00:15.229296 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Apr 20 19:00:15.229301 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Apr 20 19:00:15.229307 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 20 19:00:15.229313 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Apr 20 19:00:15.229354 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 20 19:00:15.229361 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 20 19:00:15.229367 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Apr 20 19:00:15.229372 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Apr 20 19:00:15.229379 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 20 19:00:15.229452 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 20 19:00:15.229461 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 20 19:00:15.229473 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 20 19:00:15.229481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 20 19:00:15.229489 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 20 19:00:15.229563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 20 19:00:15.229571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 20 19:00:15.229580 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 20 19:00:15.229588 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 20 19:00:15.229600 kernel: TSC deadline timer available
Apr 20 19:00:15.229609 kernel: CPU topo: Max. logical packages: 1
Apr 20 19:00:15.229617 kernel: CPU topo: Max. logical dies: 1
Apr 20 19:00:15.229625 kernel: CPU topo: Max. dies per package: 1
Apr 20 19:00:15.229633 kernel: CPU topo: Max. threads per core: 1
Apr 20 19:00:15.229649 kernel: CPU topo: Num. cores per package: 4
Apr 20 19:00:15.229660 kernel: CPU topo: Num. threads per package: 4
Apr 20 19:00:15.229669 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 20 19:00:15.230090 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 20 19:00:15.230124 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 20 19:00:15.230145 kernel: kvm-guest: setup PV sched yield
Apr 20 19:00:15.230151 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Apr 20 19:00:15.230158 kernel: Booting paravirtualized kernel on KVM
Apr 20 19:00:15.230165 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 20 19:00:15.230180 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 20 19:00:15.230187 kernel: percpu: Embedded 60 pages/cpu s207960 r8192 d29608 u524288
Apr 20 19:00:15.230194 kernel: pcpu-alloc: s207960 r8192 d29608 u524288 alloc=1*2097152
Apr 20 19:00:15.230201 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 20 19:00:15.230207 kernel: kvm-guest: PV spinlocks enabled
Apr 20 19:00:15.230213 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 20 19:00:15.230221 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 19:00:15.230230 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 20 19:00:15.230236 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 20 19:00:15.230242 kernel: Fallback order for Node 0: 0
Apr 20 19:00:15.230248 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Apr 20 19:00:15.230254 kernel: Policy zone: DMA32
Apr 20 19:00:15.230260 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 20 19:00:15.230267 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 20 19:00:15.230275 kernel: ftrace: allocating 40346 entries in 158 pages
Apr 20 19:00:15.230281 kernel: ftrace: allocated 158 pages with 5 groups
Apr 20 19:00:15.230287 kernel: Dynamic Preempt: voluntary
Apr 20 19:00:15.230293 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 20 19:00:15.230301 kernel: rcu: RCU event tracing is enabled.
Apr 20 19:00:15.230308 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 20 19:00:15.230314 kernel: Trampoline variant of Tasks RCU enabled.
Apr 20 19:00:15.230321 kernel: Rude variant of Tasks RCU enabled.
Apr 20 19:00:15.230327 kernel: Tracing variant of Tasks RCU enabled.
Apr 20 19:00:15.230393 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 20 19:00:15.230400 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 20 19:00:15.230406 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 19:00:15.230412 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 19:00:15.230418 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 20 19:00:15.230426 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 20 19:00:15.230432 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 20 19:00:15.230439 kernel: Console: colour dummy device 80x25
Apr 20 19:00:15.230445 kernel: printk: legacy console [ttyS0] enabled
Apr 20 19:00:15.230451 kernel: ACPI: Core revision 20240827
Apr 20 19:00:15.230457 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 20 19:00:15.230463 kernel: APIC: Switch to symmetric I/O mode setup
Apr 20 19:00:15.230471 kernel: x2apic enabled
Apr 20 19:00:15.230477 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 20 19:00:15.230483 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 20 19:00:15.230489 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 20 19:00:15.230551 kernel: kvm-guest: setup PV IPIs
Apr 20 19:00:15.230558 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 20 19:00:15.230564 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 19:00:15.230572 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 20 19:00:15.230578 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 20 19:00:15.230584 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 20 19:00:15.230590 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 20 19:00:15.230597 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 20 19:00:15.230603 kernel: Spectre V2 : Mitigation: Retpolines
Apr 20 19:00:15.230609 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 20 19:00:15.231060 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 20 19:00:15.231073 kernel: RETBleed: Vulnerable
Apr 20 19:00:15.231079 kernel: Speculative Store Bypass: Vulnerable
Apr 20 19:00:15.231085 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 20 19:00:15.231092 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 20 19:00:15.231098 kernel: active return thunk: its_return_thunk
Apr 20 19:00:15.231104 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 20 19:00:15.231148 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 20 19:00:15.231154 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 20 19:00:15.231162 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 20 19:00:15.231168 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 20 19:00:15.231175 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 20 19:00:15.231181 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 20 19:00:15.231187 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 20 19:00:15.231195 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 20 19:00:15.231201 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 20 19:00:15.231207 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 20 19:00:15.231213 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 20 19:00:15.231219 kernel: Freeing SMP alternatives memory: 32K
Apr 20 19:00:15.231225 kernel: pid_max: default: 32768 minimum: 301
Apr 20 19:00:15.231231 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 20 19:00:15.231294 kernel: landlock: Up and running.
Apr 20 19:00:15.231301 kernel: SELinux: Initializing.
Apr 20 19:00:15.231307 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 19:00:15.231313 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 20 19:00:15.231319 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 20 19:00:15.231326 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 20 19:00:15.231332 kernel: signal: max sigframe size: 3632
Apr 20 19:00:15.231338 kernel: rcu: Hierarchical SRCU implementation.
Apr 20 19:00:15.231347 kernel: rcu: Max phase no-delay instances is 400.
Apr 20 19:00:15.231353 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 20 19:00:15.231359 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 20 19:00:15.231365 kernel: smp: Bringing up secondary CPUs ...
Apr 20 19:00:15.231371 kernel: smpboot: x86: Booting SMP configuration:
Apr 20 19:00:15.231377 kernel: .... node #0, CPUs: #1 #2 #3
Apr 20 19:00:15.231384 kernel: smp: Brought up 1 node, 4 CPUs
Apr 20 19:00:15.231392 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 20 19:00:15.231399 kernel: Memory: 2381832K/2552216K available (14336K kernel code, 2458K rwdata, 31736K rodata, 15944K init, 2284K bss, 164492K reserved, 0K cma-reserved)
Apr 20 19:00:15.231405 kernel: devtmpfs: initialized
Apr 20 19:00:15.231411 kernel: x86/mm: Memory block size: 128MB
Apr 20 19:00:15.231417 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Apr 20 19:00:15.231424 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Apr 20 19:00:15.231430 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 20 19:00:15.231438 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 20 19:00:15.231444 kernel: pinctrl core: initialized pinctrl subsystem
Apr 20 19:00:15.231450 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 20 19:00:15.231457 kernel: audit: initializing netlink subsys (disabled)
Apr 20 19:00:15.231463 kernel: audit: type=2000 audit(1776711590.062:1): state=initialized audit_enabled=0 res=1
Apr 20 19:00:15.231470 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 20 19:00:15.231476 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 20 19:00:15.231483 kernel: cpuidle: using governor menu
Apr 20 19:00:15.231489 kernel: efi: Freeing EFI boot services memory: 42800K
Apr 20 19:00:15.231547 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 20 19:00:15.231554 kernel: dca service started, version 1.12.1
Apr 20 19:00:15.231560 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Apr 20 19:00:15.231566 kernel: PCI: Using configuration type 1 for base access
Apr 20 19:00:15.231572 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 20 19:00:15.231581 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 20 19:00:15.231587 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 20 19:00:15.231594 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 20 19:00:15.231600 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 20 19:00:15.231606 kernel: ACPI: Added _OSI(Module Device)
Apr 20 19:00:15.231612 kernel: ACPI: Added _OSI(Processor Device)
Apr 20 19:00:15.231618 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 20 19:00:15.231626 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 20 19:00:15.231632 kernel: ACPI: Interpreter enabled
Apr 20 19:00:15.231638 kernel: ACPI: PM: (supports S0 S5)
Apr 20 19:00:15.231644 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 20 19:00:15.231650 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 20 19:00:15.231656 kernel: PCI: Using E820 reservations for host bridge windows
Apr 20 19:00:15.231662 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 20 19:00:15.231670 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 20 19:00:15.232333 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 20 19:00:15.232442 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 20 19:00:15.232604 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 20 19:00:15.232613 kernel: PCI host bridge to bus 0000:00
Apr 20 19:00:15.232708 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 20 19:00:15.232916 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 20 19:00:15.233038 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 20 19:00:15.233154 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Apr 20 19:00:15.233275 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 20 19:00:15.233400 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Apr 20 19:00:15.234020 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 20 19:00:15.234230 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 20 19:00:15.234371 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 20 19:00:15.236232 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Apr 20 19:00:15.237358 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Apr 20 19:00:15.237457 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Apr 20 19:00:15.240449 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 20 19:00:15.242699 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 20 19:00:15.244694 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Apr 20 19:00:15.245681 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Apr 20 19:00:15.246001 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Apr 20 19:00:15.246191 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 20 19:00:15.246325 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Apr 20 19:00:15.246456 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Apr 20 19:00:15.246660 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Apr 20 19:00:15.248341 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 20 19:00:15.249076 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Apr 20 19:00:15.249232 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Apr 20 19:00:15.250465 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Apr 20 19:00:15.252423 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Apr 20 19:00:15.252676 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 20 19:00:15.252941 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 20 19:00:15.253112 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 20 19:00:15.253243 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Apr 20 19:00:15.253373 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Apr 20 19:00:15.254399 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 20 19:00:15.254633 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Apr 20 19:00:15.254649 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 20 19:00:15.256725 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 20 19:00:15.256902 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 20 19:00:15.256914 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 20 19:00:15.256925 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 20 19:00:15.256935 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 20 19:00:15.256949 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 20 19:00:15.256958 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 20 19:00:15.257059 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 20 19:00:15.257071 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 20 19:00:15.257079 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 20 19:00:15.257089 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 20 19:00:15.257099 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 20 19:00:15.257109 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 20 19:00:15.257119 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 20 19:00:15.257132 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 20 19:00:15.257142 kernel: iommu: Default domain type: Translated
Apr 20 19:00:15.257150 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 20 19:00:15.257159 kernel: efivars: Registered efivars operations
Apr 20 19:00:15.257168 kernel: PCI: Using ACPI for IRQ routing
Apr 20 19:00:15.257178 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 20 19:00:15.257187 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Apr 20 19:00:15.257196 kernel: e820: reserve RAM buffer [mem 0x9a105018-0x9bffffff]
Apr 20 19:00:15.257207 kernel: e820: reserve RAM buffer [mem 0x9a142018-0x9bffffff]
Apr 20 19:00:15.257216 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Apr 20 19:00:15.257225 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Apr 20 19:00:15.257578 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 20 19:00:15.257720 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 20 19:00:15.261087 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 20 19:00:15.261140 kernel: vgaarb: loaded
Apr 20 19:00:15.261153 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 20 19:00:15.261164 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 20 19:00:15.261174 kernel: clocksource: Switched to clocksource kvm-clock
Apr 20 19:00:15.261186 kernel: VFS: Disk quotas dquot_6.6.0
Apr 20 19:00:15.261197 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 20 19:00:15.261208 kernel: pnp: PnP ACPI init
Apr 20 19:00:15.261405 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 20 19:00:15.261422 kernel: pnp: PnP ACPI: found 6 devices
Apr 20 19:00:15.261435 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 20 19:00:15.261445 kernel: NET: Registered PF_INET protocol family
Apr 20 19:00:15.261455 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 20 19:00:15.261465 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 20 19:00:15.261476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 20 19:00:15.261489 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 20 19:00:15.261571 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 20 19:00:15.261581 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 20 19:00:15.261591 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 19:00:15.261600 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 20 19:00:15.261609 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 20 19:00:15.261618 kernel: NET: Registered PF_XDP protocol family
Apr 20 19:00:15.261769 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Apr 20 19:00:15.262051 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Apr 20 19:00:15.262191 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 20 19:00:15.262324 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 20 19:00:15.262450 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 20 19:00:15.262654 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Apr 20 19:00:15.262912 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Apr 20 19:00:15.263039 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Apr 20 19:00:15.263053 kernel: PCI: CLS 0 bytes, default 64
Apr 20 19:00:15.263062 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 20 19:00:15.263072 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 20 19:00:15.263082 kernel: Initialise system trusted keyrings
Apr 20 19:00:15.263093 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 20 19:00:15.263108 kernel: Key type asymmetric registered
Apr 20 19:00:15.263117 kernel: Asymmetric key parser 'x509' registered
Apr 20 19:00:15.263127 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 20 19:00:15.263137 kernel: io scheduler mq-deadline registered
Apr 20 19:00:15.263162 kernel: io scheduler kyber registered
Apr 20 19:00:15.263174 kernel: io scheduler bfq registered
Apr 20 19:00:15.263184 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 20 19:00:15.263196 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 20 19:00:15.263207 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 20 19:00:15.263217 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 20 19:00:15.263228 kernel: hrtimer: interrupt took 3943219 ns
Apr 20 19:00:15.263239 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 20 19:00:15.263250 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 20 19:00:15.263261 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 20 19:00:15.263274 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 20 19:00:15.263285 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 20 19:00:15.263426 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 20 19:00:15.263442 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Apr 20 19:00:15.264164 kernel: rtc_cmos 00:04: registered as rtc0
Apr 20 19:00:15.264304 kernel: rtc_cmos 00:04: setting system clock to 2026-04-20T19:00:01 UTC (1776711601)
Apr 20 19:00:15.264429 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Apr 20 19:00:15.264442 kernel: intel_pstate: CPU model not supported
Apr 20 19:00:15.264453 kernel: efifb: probing for efifb
Apr 20 19:00:15.264463 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Apr 20 19:00:15.264473 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Apr 20 19:00:15.264483 kernel: efifb: scrolling: redraw
Apr 20 19:00:15.265143 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 20 19:00:15.265216 kernel: Console: switching to colour frame buffer device 160x50
Apr 20 19:00:15.265228 kernel: fb0: EFI VGA frame buffer device
Apr 20 19:00:15.265237 kernel: pstore: Using crash dump compression: deflate
Apr 20 19:00:15.265247 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 20 19:00:15.265257 kernel: NET: Registered PF_INET6 protocol family
Apr 20 19:00:15.265269 kernel: Segment Routing with IPv6
Apr 20 19:00:15.265278 kernel: In-situ OAM (IOAM) with IPv6
Apr 20 19:00:15.265287 kernel: NET: Registered PF_PACKET protocol family
Apr 20 19:00:15.265298 kernel: Key type dns_resolver registered
Apr 20 19:00:15.265307 kernel: IPI shorthand broadcast: enabled
Apr 20 19:00:15.265316 kernel: sched_clock: Marking stable (12060063694, 1726205961)->(15725530619, -1939260964)
Apr 20 19:00:15.265325 kernel: registered taskstats version 1
Apr 20 19:00:15.265338 kernel: Loading compiled-in X.509 certificates
Apr 20 19:00:15.265347 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 7cf14208c08026297bea8a5678f7340932b35e4b'
Apr 20 19:00:15.265358 kernel: Demotion targets for Node 0: null
Apr 20 19:00:15.265367 kernel: Key type .fscrypt registered
Apr 20 19:00:15.265442 kernel: Key type fscrypt-provisioning registered
Apr 20 19:00:15.265453 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 20 19:00:15.265463 kernel: ima: Allocated hash algorithm: sha1 Apr 20 19:00:15.265476 kernel: ima: No architecture policies found Apr 20 19:00:15.265486 kernel: clk: Disabling unused clocks Apr 20 19:00:15.266217 kernel: Freeing unused kernel image (initmem) memory: 15944K Apr 20 19:00:15.266235 kernel: Write protecting the kernel read-only data: 47104k Apr 20 19:00:15.266246 kernel: Freeing unused kernel image (rodata/data gap) memory: 1032K Apr 20 19:00:15.266257 kernel: Run /init as init process Apr 20 19:00:15.266268 kernel: with arguments: Apr 20 19:00:15.266308 kernel: /init Apr 20 19:00:15.266318 kernel: with environment: Apr 20 19:00:15.266329 kernel: HOME=/ Apr 20 19:00:15.266339 kernel: TERM=linux Apr 20 19:00:15.266349 kernel: SCSI subsystem initialized Apr 20 19:00:15.266360 kernel: libata version 3.00 loaded. Apr 20 19:00:15.266608 kernel: ahci 0000:00:1f.2: version 3.0 Apr 20 19:00:15.266629 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 20 19:00:15.266763 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 20 19:00:15.267026 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 20 19:00:15.267156 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 20 19:00:15.270172 kernel: scsi host0: ahci Apr 20 19:00:15.270356 kernel: scsi host1: ahci Apr 20 19:00:15.272436 kernel: scsi host2: ahci Apr 20 19:00:15.272742 kernel: scsi host3: ahci Apr 20 19:00:15.273046 kernel: scsi host4: ahci Apr 20 19:00:15.273224 kernel: scsi host5: ahci Apr 20 19:00:15.273243 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Apr 20 19:00:15.273281 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Apr 20 19:00:15.273292 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Apr 20 19:00:15.273303 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Apr 20 19:00:15.273313 kernel: 
ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Apr 20 19:00:15.273324 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Apr 20 19:00:15.273335 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 20 19:00:15.273346 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 20 19:00:15.273358 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 20 19:00:15.273371 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 20 19:00:15.273381 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 20 19:00:15.273391 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 20 19:00:15.273401 kernel: ata3.00: LPM support broken, forcing max_power Apr 20 19:00:15.273412 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 20 19:00:15.273424 kernel: ata3.00: applying bridge limits Apr 20 19:00:15.273439 kernel: ata3.00: LPM support broken, forcing max_power Apr 20 19:00:15.273450 kernel: ata3.00: configured for UDMA/100 Apr 20 19:00:15.273720 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 20 19:00:15.276400 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 20 19:00:15.276655 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Apr 20 19:00:15.279463 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 20 19:00:15.279571 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 20 19:00:15.279583 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 20 19:00:15.279594 kernel: GPT:16515071 != 27000831 Apr 20 19:00:15.279602 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 20 19:00:15.279611 kernel: GPT:16515071 != 27000831 Apr 20 19:00:15.279620 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 20 19:00:15.279630 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 20 19:00:15.279929 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 20 19:00:15.279949 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 20 19:00:15.279960 kernel: device-mapper: uevent: version 1.0.3 Apr 20 19:00:15.279971 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 20 19:00:15.279982 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Apr 20 19:00:15.279992 kernel: raid6: avx512x4 gen() 10505 MB/s Apr 20 19:00:15.280003 kernel: raid6: avx512x2 gen() 19519 MB/s Apr 20 19:00:15.280020 kernel: raid6: avx512x1 gen() 17318 MB/s Apr 20 19:00:15.280031 kernel: raid6: avx2x4 gen() 11685 MB/s Apr 20 19:00:15.280041 kernel: raid6: avx2x2 gen() 10520 MB/s Apr 20 19:00:15.280066 kernel: raid6: avx2x1 gen() 10119 MB/s Apr 20 19:00:15.280077 kernel: raid6: using algorithm avx512x2 gen() 19519 MB/s Apr 20 19:00:15.280087 kernel: raid6: .... 
xor() 11220 MB/s, rmw enabled Apr 20 19:00:15.280098 kernel: raid6: using avx512x2 recovery algorithm Apr 20 19:00:15.280111 kernel: xor: automatically using best checksumming function avx Apr 20 19:00:15.280121 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 20 19:00:15.280132 kernel: BTRFS: device fsid 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (181) Apr 20 19:00:15.280142 kernel: BTRFS info (device dm-0): first mount of filesystem 2b1891e6-d4d2-4c02-a1ed-3a6feccae86f Apr 20 19:00:15.280153 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 20 19:00:15.280164 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 20 19:00:15.280173 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 20 19:00:15.280185 kernel: loop: module loaded Apr 20 19:00:15.280195 kernel: loop0: detected capacity change from 0 to 106960 Apr 20 19:00:15.280206 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 20 19:00:15.280219 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:2: Support for option DefaultCPUAccounting= has been removed and it is ignored Apr 20 19:00:15.280233 systemd[1]: /etc/systemd/system.conf.d/nocgroup.conf:5: Support for option DefaultBlockIOAccounting= has been removed and it is ignored Apr 20 19:00:15.280244 systemd[1]: Successfully made /usr/ read-only. Apr 20 19:00:15.280258 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 20 19:00:15.280270 systemd[1]: Detected virtualization kvm. Apr 20 19:00:15.280280 systemd[1]: Detected architecture x86-64. Apr 20 19:00:15.280291 systemd[1]: Running in initrd. 
Apr 20 19:00:15.280303 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Apr 20 19:00:15.280313 systemd[1]: No hostname configured, using default hostname.
Apr 20 19:00:15.280325 systemd[1]: Hostname set to .
Apr 20 19:00:15.280337 systemd[1]: Queued start job for default target initrd.target.
Apr 20 19:00:15.280348 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Apr 20 19:00:15.280359 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 20 19:00:15.280370 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 20 19:00:15.280382 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 20 19:00:15.280396 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 20 19:00:15.280407 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 20 19:00:15.280417 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 20 19:00:15.280428 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 20 19:00:15.280438 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 20 19:00:15.280450 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 20 19:00:15.280460 systemd[1]: Reached target paths.target - Path Units.
Apr 20 19:00:15.280473 systemd[1]: Reached target slices.target - Slice Units.
Apr 20 19:00:15.280484 systemd[1]: Reached target swap.target - Swaps.
Apr 20 19:00:15.280562 systemd[1]: Reached target timers.target - Timer Units.
Apr 20 19:00:15.280574 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 20 19:00:15.280584 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 20 19:00:15.280593 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Apr 20 19:00:15.280604 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 20 19:00:15.280617 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 20 19:00:15.280626 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 20 19:00:15.280636 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 20 19:00:15.280646 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 20 19:00:15.280658 systemd[1]: Reached target sockets.target - Socket Units.
Apr 20 19:00:15.280668 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 20 19:00:15.280679 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 20 19:00:15.280689 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 20 19:00:15.280700 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 20 19:00:15.280711 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 20 19:00:15.280722 systemd[1]: Starting systemd-fsck-usr.service...
Apr 20 19:00:15.280736 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 20 19:00:15.280746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 20 19:00:15.280757 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 19:00:15.280769 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 20 19:00:15.281668 systemd-journald[320]: Collecting audit messages is enabled.
Apr 20 19:00:15.281708 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 20 19:00:15.281723 kernel: audit: type=1130 audit(1776711615.219:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.281735 kernel: audit: type=1130 audit(1776711615.263:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.281751 systemd[1]: Finished systemd-fsck-usr.service.
Apr 20 19:00:15.281763 systemd-journald[320]: Journal started
Apr 20 19:00:15.281907 systemd-journald[320]: Runtime Journal (/run/log/journal/7d59c4521b6a4ae798963236ead50d67) is 5.9M, max 47.8M, 41.8M free.
Apr 20 19:00:15.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.328280 kernel: audit: type=1130 audit(1776711615.294:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.330283 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 20 19:00:15.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.352676 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 20 19:00:15.412977 kernel: audit: type=1130 audit(1776711615.339:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.417477 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 20 19:00:15.551052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:00:15.558300 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 20 19:00:15.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.600312 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 19:00:15.704631 kernel: audit: type=1130 audit(1776711615.578:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.694447 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 20 19:00:15.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.759013 kernel: audit: type=1130 audit(1776711615.621:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.751282 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 20 19:00:15.780378 kernel: audit: type=1130 audit(1776711615.720:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.807669 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 20 19:00:15.826652 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 20 19:00:15.840011 systemd-modules-load[321]: Inserted module 'br_netfilter'
Apr 20 19:00:15.842701 kernel: Bridge firewalling registered
Apr 20 19:00:15.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.951102 kernel: audit: type=1130 audit(1776711615.912:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.912352 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 20 19:00:15.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:16.019295 kernel: audit: type=1130 audit(1776711615.963:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:15.940308 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 20 19:00:15.951579 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 20 19:00:15.980413 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 20 19:00:16.112078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 20 19:00:16.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:16.175310 kernel: audit: type=1130 audit(1776711616.134:11): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:16.176203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 20 19:00:16.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:16.193000 audit: BPF prog-id=5 op=LOAD
Apr 20 19:00:16.209150 dracut-cmdline[354]: dracut-109
Apr 20 19:00:16.198229 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 20 19:00:16.233774 dracut-cmdline[354]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=70de22794192cd52e167b5a4b1ae0509811ded61dbe4152dfc02378f843ae81a
Apr 20 19:00:16.507464 systemd-resolved[368]: Positive Trust Anchors:
Apr 20 19:00:16.508360 systemd-resolved[368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 20 19:00:16.508395 systemd-resolved[368]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Apr 20 19:00:16.508458 systemd-resolved[368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 20 19:00:16.755108 systemd-resolved[368]: Defaulting to hostname 'linux'.
Apr 20 19:00:16.788064 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 20 19:00:16.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:16.805080 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 20 19:00:17.876141 kernel: Loading iSCSI transport class v2.0-870.
Apr 20 19:00:18.064275 kernel: iscsi: registered transport (tcp)
Apr 20 19:00:18.209705 kernel: iscsi: registered transport (qla4xxx)
Apr 20 19:00:18.210082 kernel: QLogic iSCSI HBA Driver
Apr 20 19:00:18.629363 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line...
Apr 20 19:00:18.778258 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line.
Apr 20 19:00:18.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:18.809417 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 20 19:00:19.345746 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 20 19:00:19.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:19.408256 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 20 19:00:19.423682 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 20 19:00:19.636473 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 20 19:00:19.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:19.659000 audit: BPF prog-id=6 op=LOAD
Apr 20 19:00:19.659000 audit: BPF prog-id=7 op=LOAD
Apr 20 19:00:19.662266 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 20 19:00:19.846106 systemd-udevd[583]: Using default interface naming scheme 'v258'.
Apr 20 19:00:19.971171 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 20 19:00:19.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:20.009136 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 20 19:00:20.154734 dracut-pre-trigger[628]: rd.md=0: removing MD RAID activation
Apr 20 19:00:20.498677 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 20 19:00:20.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:20.537006 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 20 19:00:20.594135 kernel: kauditd_printk_skb: 9 callbacks suppressed
Apr 20 19:00:20.594184 kernel: audit: type=1130 audit(1776711620.523:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:20.615330 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 20 19:00:20.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:20.668193 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 20 19:00:20.657000 audit: BPF prog-id=8 op=LOAD
Apr 20 19:00:20.699269 kernel: audit: type=1130 audit(1776711620.650:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:20.699296 kernel: audit: type=1334 audit(1776711620.657:23): prog-id=8 op=LOAD
Apr 20 19:00:21.022646 systemd-networkd[737]: lo: Link UP
Apr 20 19:00:21.022713 systemd-networkd[737]: lo: Gained carrier
Apr 20 19:00:21.042262 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 20 19:00:21.072776 systemd[1]: Reached target network.target - Network.
Apr 20 19:00:21.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:21.094919 kernel: audit: type=1130 audit(1776711621.071:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:21.168203 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 20 19:00:21.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:21.245919 kernel: audit: type=1130 audit(1776711621.207:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:21.217420 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 20 19:00:21.648334 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 20 19:00:21.740483 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 20 19:00:21.792320 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 20 19:00:21.857381 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 20 19:00:21.920331 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 20 19:00:22.031661 kernel: cryptd: max_cpu_qlen set to 1000
Apr 20 19:00:22.119454 disk-uuid[776]: Primary Header is updated.
Apr 20 19:00:22.119454 disk-uuid[776]: Secondary Entries is updated.
Apr 20 19:00:22.119454 disk-uuid[776]: Secondary Header is updated.
Apr 20 19:00:22.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:22.240345 kernel: audit: type=1131 audit(1776711622.185:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:22.149654 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 19:00:22.289422 kernel: AES CTR mode by8 optimization enabled
Apr 20 19:00:22.149769 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:00:22.185237 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 19:00:22.347292 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 20 19:00:22.320628 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 19:00:22.347487 systemd-networkd[737]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 19:00:22.347490 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 20 19:00:22.359308 systemd-networkd[737]: eth0: Link UP
Apr 20 19:00:22.363279 systemd-networkd[737]: eth0: Gained carrier
Apr 20 19:00:22.363301 systemd-networkd[737]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Apr 20 19:00:22.506255 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 20 19:00:22.582662 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 19:00:22.584468 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:00:22.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:22.646769 kernel: audit: type=1130 audit(1776711622.611:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:22.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:22.675973 kernel: audit: type=1131 audit(1776711622.612:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:22.676200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 19:00:22.829254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:00:22.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:22.894204 kernel: audit: type=1130 audit(1776711622.864:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:23.078217 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 20 19:00:23.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:23.118676 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 20 19:00:23.135262 kernel: audit: type=1130 audit(1776711623.105:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:23.168267 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 20 19:00:23.194742 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 20 19:00:23.250129 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 20 19:00:23.350650 disk-uuid[778]: Warning: The kernel is still using the old partition table.
Apr 20 19:00:23.350650 disk-uuid[778]: The new table will be used at the next reboot or after you
Apr 20 19:00:23.350650 disk-uuid[778]: run partprobe(8) or kpartx(8)
Apr 20 19:00:23.350650 disk-uuid[778]: The operation has completed successfully.
Apr 20 19:00:23.404320 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 20 19:00:23.404608 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 20 19:00:23.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:23.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:23.435377 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 20 19:00:23.526629 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 20 19:00:23.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:23.648390 systemd-networkd[737]: eth0: Gained IPv6LL
Apr 20 19:00:23.863396 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (901)
Apr 20 19:00:23.879240 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 19:00:23.879436 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 20 19:00:24.037128 kernel: BTRFS info (device vda6): turning on async discard
Apr 20 19:00:24.037386 kernel: BTRFS info (device vda6): enabling free space tree
Apr 20 19:00:24.304478 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf
Apr 20 19:00:24.359708 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 20 19:00:24.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:24.408640 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 20 19:00:25.651930 ignition[920]: Ignition 2.24.0
Apr 20 19:00:25.678217 ignition[920]: Stage: fetch-offline
Apr 20 19:00:25.678499 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Apr 20 19:00:25.678515 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 20 19:00:25.680015 ignition[920]: parsed url from cmdline: ""
Apr 20 19:00:25.680021 ignition[920]: no config URL provided
Apr 20 19:00:25.680143 ignition[920]: reading system config file "/usr/lib/ignition/user.ign"
Apr 20 19:00:25.680453 ignition[920]: no config at "/usr/lib/ignition/user.ign"
Apr 20 19:00:25.680686 ignition[920]: op(1): [started] loading QEMU firmware config module
Apr 20 19:00:25.680692 ignition[920]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 20 19:00:25.986505 ignition[920]: op(1): [finished] loading QEMU firmware config module
Apr 20 19:00:29.317734 ignition[920]: parsing config with SHA512: 5ded6116325716d7dfc6befe7d47c3f33a9a6712c12a5f64a41bf0006a38e4d8fe0d3a73932ed1f9d0e9ae27d235b3751009ef3e717b5d443c959552b8caed22
Apr 20 19:00:29.345955 unknown[920]: fetched base config from "system"
Apr 20 19:00:29.346170 unknown[920]: fetched user config from "qemu"
Apr 20 19:00:29.349987 ignition[920]: fetch-offline: fetch-offline passed
Apr 20 19:00:29.393402 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 20 19:00:29.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:00:29.350127 ignition[920]: Ignition finished successfully
Apr 20 19:00:29.476223 kernel: kauditd_printk_skb: 4 callbacks suppressed
Apr 20 19:00:29.424333 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 20 19:00:29.476501 kernel: audit: type=1130 audit(1776711629.408:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:29.476428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 20 19:00:30.296351 ignition[932]: Ignition 2.24.0 Apr 20 19:00:30.306416 ignition[932]: Stage: kargs Apr 20 19:00:30.317490 ignition[932]: no configs at "/usr/lib/ignition/base.d" Apr 20 19:00:30.318469 ignition[932]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:00:30.337939 ignition[932]: kargs: kargs passed Apr 20 19:00:30.338148 ignition[932]: Ignition finished successfully Apr 20 19:00:30.370257 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 20 19:00:30.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:30.412355 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 20 19:00:30.452172 kernel: audit: type=1130 audit(1776711630.393:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:30.943353 ignition[940]: Ignition 2.24.0 Apr 20 19:00:30.943441 ignition[940]: Stage: disks Apr 20 19:00:30.948493 ignition[940]: no configs at "/usr/lib/ignition/base.d" Apr 20 19:00:30.948519 ignition[940]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:00:30.963663 ignition[940]: disks: disks passed Apr 20 19:00:31.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:00:30.989957 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 20 19:00:31.108275 kernel: audit: type=1130 audit(1776711631.010:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:30.964491 ignition[940]: Ignition finished successfully Apr 20 19:00:31.031547 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 20 19:00:31.117446 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 20 19:00:31.141395 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 20 19:00:31.177083 systemd[1]: Reached target sysinit.target - System Initialization. Apr 20 19:00:31.201523 systemd[1]: Reached target basic.target - Basic System. Apr 20 19:00:31.243743 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 20 19:00:31.828449 systemd-fsck[951]: ROOT: clean, 15/456736 files, 38230/456704 blocks Apr 20 19:00:31.881219 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 20 19:00:31.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:31.933492 kernel: audit: type=1130 audit(1776711631.885:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:31.922507 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 20 19:00:34.391169 kernel: EXT4-fs (vda9): mounted filesystem 2bdffc2e-451a-418b-b04b-9e3cd9229e7e r/w with ordered data mode. Quota mode: none. Apr 20 19:00:34.424479 systemd[1]: Mounted sysroot.mount - /sysroot. 
Apr 20 19:00:34.481457 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 20 19:00:34.512446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 20 19:00:34.540207 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 20 19:00:34.552549 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 20 19:00:34.554749 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 20 19:00:34.738406 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (961) Apr 20 19:00:34.556558 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 20 19:00:34.762358 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:00:34.590527 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 20 19:00:34.782015 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 19:00:34.617325 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 20 19:00:34.805739 kernel: BTRFS info (device vda6): turning on async discard Apr 20 19:00:34.806000 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 19:00:34.806465 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 20 19:00:37.830533 kernel: loop1: detected capacity change from 0 to 43472 Apr 20 19:00:37.866438 kernel: loop1: p1 p2 p3 Apr 20 19:00:38.216167 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:00:38.216232 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:00:38.216253 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:00:38.234888 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:00:38.234362 systemd-confext[1051]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument Apr 20 19:00:38.313259 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:00:39.528189 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Apr 20 19:00:39.626744 kernel: loop2: detected capacity change from 0 to 43472 Apr 20 19:00:39.654331 kernel: loop2: p1 p2 p3 Apr 20 19:00:39.943674 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:00:39.946072 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:00:39.946212 kernel: device-mapper: table: 253:1: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:00:39.955279 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:00:39.964028 (sd-merge)[1065]: device-mapper: reload ioctl on 036bab43330e5a16b58b2997d79b59667c299046b83a0d438261a470d6586a8f-verity (253:1) failed: Invalid argument Apr 20 19:00:39.992408 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:00:40.685431 kernel: erofs: (device dm-1): mounted with root inode @ nid 40. Apr 20 19:00:40.692574 (sd-merge)[1065]: Using extensions '00-flatcar-default.raw'. Apr 20 19:00:40.696775 (sd-merge)[1065]: Merged extensions into '/sysroot/etc'. 
Apr 20 19:00:40.751300 initrd-setup-root[1072]: /etc 00-flatcar-default Mon 2026-04-20 19:00:15 UTC Apr 20 19:00:40.819433 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 20 19:00:40.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:40.863563 kernel: audit: type=1130 audit(1776711640.834:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:40.851733 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 20 19:00:40.889551 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 20 19:00:40.968311 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 20 19:00:41.011506 kernel: BTRFS info (device vda6): last unmount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:00:41.111420 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 20 19:00:41.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:41.152211 kernel: audit: type=1130 audit(1776711641.111:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:00:41.171205 ignition[1082]: INFO : Ignition 2.24.0 Apr 20 19:00:41.171205 ignition[1082]: INFO : Stage: mount Apr 20 19:00:41.192063 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 19:00:41.192063 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:00:41.243512 ignition[1082]: INFO : mount: mount passed Apr 20 19:00:41.304339 ignition[1082]: INFO : Ignition finished successfully Apr 20 19:00:41.319097 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 20 19:00:41.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:41.368516 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 20 19:00:41.404518 kernel: audit: type=1130 audit(1776711641.340:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:41.674722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 20 19:00:41.906198 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1093) Apr 20 19:00:41.931340 kernel: BTRFS info (device vda6): first mount of filesystem 17906e87-85d1-46f5-980e-3e85555360cf Apr 20 19:00:41.933743 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 20 19:00:42.085116 kernel: BTRFS info (device vda6): turning on async discard Apr 20 19:00:42.085386 kernel: BTRFS info (device vda6): enabling free space tree Apr 20 19:00:42.152576 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 20 19:00:42.521158 ignition[1109]: INFO : Ignition 2.24.0 Apr 20 19:00:42.521158 ignition[1109]: INFO : Stage: files Apr 20 19:00:42.550077 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 19:00:42.550077 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:00:42.550077 ignition[1109]: DEBUG : files: compiled without relabeling support, skipping Apr 20 19:00:42.550077 ignition[1109]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 20 19:00:42.550077 ignition[1109]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 20 19:00:42.639981 ignition[1109]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 20 19:00:42.639981 ignition[1109]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 20 19:00:42.675531 ignition[1109]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 20 19:00:42.649538 unknown[1109]: wrote ssh authorized keys file for user: core Apr 20 19:00:42.710975 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 20 19:00:42.710975 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 20 19:00:43.272772 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 20 19:00:43.948073 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 20 19:00:43.948073 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 20 19:00:44.012275 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 20 19:00:44.628440 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 20 19:00:47.142726 ignition[1109]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 20 19:00:47.142726 ignition[1109]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 20 19:00:47.218740 ignition[1109]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 20 19:00:47.218740 ignition[1109]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 20 19:00:47.218740 ignition[1109]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 20 19:00:47.218740 ignition[1109]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 20 19:00:47.218740 ignition[1109]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 20 19:00:47.218740 ignition[1109]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 20 19:00:47.218740 ignition[1109]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 20 19:00:47.218740 ignition[1109]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 20 19:00:49.046447 ignition[1109]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 20 19:00:49.305451 ignition[1109]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 20 19:00:49.330265 ignition[1109]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Apr 20 19:00:49.330265 ignition[1109]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 20 19:00:49.330265 ignition[1109]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 20 19:00:49.330265 ignition[1109]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 20 19:00:49.330265 ignition[1109]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 20 19:00:49.330265 ignition[1109]: INFO : files: files passed Apr 20 19:00:49.330265 ignition[1109]: INFO : Ignition finished successfully Apr 20 19:00:49.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:49.539740 kernel: audit: type=1130 audit(1776711649.439:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:49.420175 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 20 19:00:49.454347 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 20 19:00:49.518700 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 20 19:00:49.587410 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 20 19:00:49.588477 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 20 19:00:49.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:00:49.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:49.720709 kernel: audit: type=1130 audit(1776711649.617:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:49.722377 kernel: audit: type=1131 audit(1776711649.619:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:00:49.805422 initrd-setup-root-after-ignition[1142]: grep: /sysroot/oem/oem-release: No such file or directory Apr 20 19:00:49.931200 initrd-setup-root-after-ignition[1144]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 20 19:00:49.931200 initrd-setup-root-after-ignition[1144]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 20 19:00:49.971161 initrd-setup-root-after-ignition[1148]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 20 19:00:50.236379 kernel: loop3: detected capacity change from 0 to 43472 Apr 20 19:00:50.275380 kernel: loop3: p1 p2 p3 Apr 20 19:00:50.901543 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:00:50.903300 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:00:50.903363 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:00:50.911907 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:00:50.915075 systemd-confext[1150]: device-mapper: reload ioctl on loop3p1-verity (253:2) failed: Invalid argument Apr 20 19:00:50.968336 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 
19:00:52.770133 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Apr 20 19:00:52.955080 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 19:00:52.996501 kernel: loop4: p1 p2 p3 Apr 20 19:00:53.491361 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:00:53.492412 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:00:53.492631 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:00:53.509385 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:00:53.515508 (sd-merge)[1162]: device-mapper: reload ioctl on loop4p1-verity (253:2) failed: Invalid argument Apr 20 19:00:53.608532 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:00:55.099397 kernel: erofs: (device dm-2): mounted with root inode @ nid 40. Apr 20 19:00:55.123277 (sd-merge)[1162]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh. Apr 20 19:00:55.197261 kernel: device-mapper: ioctl: remove_all left 2 open device(s) Apr 20 19:00:55.299599 kernel: loop4: detected capacity change from 0 to 178200 Apr 20 19:00:55.332757 kernel: loop4: p1 p2 p3 Apr 20 19:00:56.001458 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:00:56.005537 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:00:56.005627 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:00:56.017154 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:00:56.017501 systemd-sysext[1170]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument Apr 20 19:00:56.114322 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:00:58.231305 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 20 19:00:58.535143 kernel: loop5: detected capacity change from 0 to 378016 Apr 20 19:00:58.589229 kernel: loop5: p1 p2 p3 Apr 20 19:00:59.134748 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:00:59.139520 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:00:59.139880 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:00:59.145530 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:00:59.150580 systemd-sysext[1170]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:2) failed: Invalid argument Apr 20 19:00:59.188997 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:01:01.683457 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. Apr 20 19:01:02.126348 kernel: loop6: detected capacity change from 0 to 228704 Apr 20 19:01:02.944077 kernel: loop7: detected capacity change from 0 to 178200 Apr 20 19:01:02.986601 kernel: loop7: p1 p2 p3 Apr 20 19:01:03.613759 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:01:03.614167 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:01:03.624356 kernel: device-mapper: table: 253:2: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:01:03.639456 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:01:03.645386 (sd-merge)[1190]: device-mapper: reload ioctl on 47e3a0d62726bde98fcb471f946aa0f0e9f97280e4f7267ec40f142aba643eb6-verity (253:2) failed: Invalid argument Apr 20 19:01:03.696175 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:01:04.686607 kernel: erofs: (device dm-2): mounted with root inode @ nid 39. 
Apr 20 19:01:04.739403 kernel: loop1: detected capacity change from 0 to 378016 Apr 20 19:01:04.877999 kernel: loop1: p1 p2 p3 Apr 20 19:01:05.318352 (sd-merge)[1190]: device-mapper: reload ioctl on 5f63b01eb609e19b7df6b1f3554b098a8644903507171258f91f339ee69140b0-verity (253:3) failed: Invalid argument Apr 20 19:01:05.345360 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:01:05.345496 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:01:05.345617 kernel: device-mapper: table: 253:3: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:01:05.345631 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:01:05.345644 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:01:06.232351 kernel: erofs: (device dm-3): mounted with root inode @ nid 39. Apr 20 19:01:06.285522 kernel: loop3: detected capacity change from 0 to 228704 Apr 20 19:01:06.859970 (sd-merge)[1190]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes-v1.33.8-x86-64.raw'. Apr 20 19:01:06.872598 (sd-merge)[1190]: Merged extensions into '/sysroot/usr'. Apr 20 19:01:06.928966 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 20 19:01:06.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:06.948388 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 20 19:01:07.009697 kernel: audit: type=1130 audit(1776711666.944:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:07.037033 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Apr 20 19:01:07.275946 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 20 19:01:07.276303 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 20 19:01:07.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:07.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:07.395987 kernel: audit: type=1130 audit(1776711667.303:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:07.323302 systemd[1]: initrd-parse-etc.service: Triggering OnSuccess= dependencies. Apr 20 19:01:07.414552 kernel: audit: type=1131 audit(1776711667.303:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:07.332949 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 20 19:01:07.394401 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 20 19:01:07.416653 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 20 19:01:07.438533 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 20 19:01:07.932548 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 20 19:01:07.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:08.007486 kernel: audit: type=1130 audit(1776711667.947:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:08.021598 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 20 19:01:08.509056 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 20 19:01:08.545695 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 19:01:08.649319 systemd[1]: Stopped target timers.target - Timer Units. Apr 20 19:01:08.677546 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 20 19:01:08.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:08.789440 kernel: audit: type=1131 audit(1776711668.693:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:08.679934 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 20 19:01:08.698451 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 20 19:01:08.723261 systemd[1]: Stopped target basic.target - Basic System. Apr 20 19:01:08.731918 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 20 19:01:08.732611 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 20 19:01:08.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:08.959418 kernel: audit: type=1131 audit(1776711668.930:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:08.747137 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 20 19:01:08.765267 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 20 19:01:08.768644 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 20 19:01:08.782713 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 20 19:01:08.793694 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 20 19:01:09.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:09.199730 kernel: audit: type=1131 audit(1776711669.135:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:08.841662 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 20 19:01:09.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:09.257202 kernel: audit: type=1131 audit(1776711669.207:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:08.885621 systemd[1]: Stopped target swap.target - Swaps. Apr 20 19:01:08.910706 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Apr 20 19:01:08.911196 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 20 19:01:08.931349 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 20 19:01:08.981977 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 19:01:09.025106 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 20 19:01:09.038200 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 19:01:09.108953 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 20 19:01:09.112182 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 20 19:01:09.148303 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 20 19:01:09.152891 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 20 19:01:09.211484 systemd[1]: ignition-fetch-offline.service: Consumed 2.120s CPU time. Apr 20 19:01:09.217326 systemd[1]: Stopped target paths.target - Path Units. Apr 20 19:01:09.309082 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 20 19:01:09.313969 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 19:01:09.355293 systemd[1]: Stopped target slices.target - Slice Units. Apr 20 19:01:09.372491 systemd[1]: Stopped target sockets.target - Socket Units. Apr 20 19:01:09.443579 systemd[1]: iscsid.socket: Deactivated successfully. Apr 20 19:01:09.453434 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 20 19:01:09.720709 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 20 19:01:09.740702 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 20 19:01:09.769419 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Apr 20 19:01:09.769622 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. 
Apr 20 19:01:09.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:09.894716 kernel: audit: type=1131 audit(1776711669.836:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:09.791448 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 20 19:01:09.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:09.940233 kernel: audit: type=1131 audit(1776711669.898:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:09.801420 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 20 19:01:09.837167 systemd[1]: initrd-setup-root-after-ignition.service: Consumed 2.957s CPU time. Apr 20 19:01:09.838698 systemd[1]: ignition-files.service: Deactivated successfully. Apr 20 19:01:09.842134 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 20 19:01:09.900233 systemd[1]: ignition-files.service: Consumed 4.275s CPU time. Apr 20 19:01:09.944345 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 20 19:01:10.003906 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 20 19:01:10.090420 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 20 19:01:10.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:10.090711 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 20 19:01:10.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.100731 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 20 19:01:10.101339 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 19:01:10.145402 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 20 19:01:10.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.146108 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 20 19:01:10.283955 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 20 19:01:10.284193 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 20 19:01:10.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:10.400726 ignition[1219]: INFO : Ignition 2.24.0 Apr 20 19:01:10.400726 ignition[1219]: INFO : Stage: umount Apr 20 19:01:10.418122 ignition[1219]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 20 19:01:10.418122 ignition[1219]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 20 19:01:10.418122 ignition[1219]: INFO : umount: umount passed Apr 20 19:01:10.418122 ignition[1219]: INFO : Ignition finished successfully Apr 20 19:01:10.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.418955 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 20 19:01:10.419307 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 20 19:01:10.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.497329 systemd[1]: Stopped target network.target - Network. Apr 20 19:01:10.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.517717 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 20 19:01:10.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.519979 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Apr 20 19:01:10.543553 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 20 19:01:10.543658 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 20 19:01:10.544034 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 20 19:01:10.544149 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 20 19:01:10.569062 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 20 19:01:10.569287 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 20 19:01:10.599189 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 20 19:01:10.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.617412 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 20 19:01:10.704441 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 20 19:01:10.713401 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 20 19:01:10.714950 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 20 19:01:10.788536 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 20 19:01:10.810736 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 20 19:01:10.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.849755 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 20 19:01:10.889071 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 20 19:01:10.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:10.919000 audit: BPF prog-id=5 op=UNLOAD Apr 20 19:01:10.921000 audit: BPF prog-id=8 op=UNLOAD Apr 20 19:01:10.923430 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 20 19:01:10.929480 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 20 19:01:10.929599 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 20 19:01:10.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:10.962439 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 20 19:01:10.962989 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 20 19:01:10.970641 systemd[1]: initrd-setup-root.service: Consumed 2.110s CPU time. Apr 20 19:01:11.011190 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 20 19:01:11.034618 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 20 19:01:11.035597 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 20 19:01:11.047171 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 20 19:01:11.048161 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 20 19:01:11.097730 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 20 19:01:11.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:11.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:11.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:11.098691 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 20 19:01:11.116032 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 19:01:11.203177 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 20 19:01:11.242338 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 19:01:11.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:11.252986 systemd[1]: systemd-udevd.service: Consumed 7.297s CPU time. Apr 20 19:01:11.297527 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 20 19:01:11.300441 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 20 19:01:11.336986 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 20 19:01:11.337044 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 20 19:01:11.367002 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 20 19:01:11.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:11.376713 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 20 19:01:11.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:11.449177 systemd[1]: dracut-cmdline.service: Consumed 1.762s CPU time. 
Apr 20 19:01:11.491369 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 20 19:01:11.494013 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 20 19:01:11.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:11.640558 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 20 19:01:11.719364 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 20 19:01:11.719692 systemd[1]: Stopped systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 19:01:11.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:11.768484 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 20 19:01:11.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:11.771479 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 19:01:11.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:11.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:11.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:11.803350 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 20 19:01:11.803614 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 20 19:01:11.891742 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 20 19:01:11.892013 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 19:01:11.897439 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 20 19:01:11.897556 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 20 19:01:12.041702 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 20 19:01:12.095131 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 20 19:01:12.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:12.128362 kernel: kauditd_printk_skb: 28 callbacks suppressed Apr 20 19:01:12.128441 kernel: audit: type=1130 audit(1776711672.118:83): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:12.129531 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 20 19:01:12.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:12.189154 kernel: audit: type=1131 audit(1776711672.118:84): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:12.130393 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 20 19:01:12.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:12.231728 kernel: audit: type=1131 audit(1776711672.197:85): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:12.203687 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 20 19:01:12.298252 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 20 19:01:12.529570 systemd[1]: Switching root. Apr 20 19:01:12.740528 systemd-journald[320]: Received SIGTERM from PID 1 (systemd). 
Apr 20 19:01:12.741226 systemd-journald[320]: Journal stopped Apr 20 19:01:30.720674 kernel: audit: type=1335 audit(1776711672.755:86): pid=320 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1 Apr 20 19:01:30.721252 kernel: SELinux: policy capability network_peer_controls=1 Apr 20 19:01:30.721431 kernel: SELinux: policy capability open_perms=1 Apr 20 19:01:30.721448 kernel: SELinux: policy capability extended_socket_class=1 Apr 20 19:01:30.721464 kernel: SELinux: policy capability always_check_network=0 Apr 20 19:01:30.721477 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 20 19:01:30.721492 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 20 19:01:30.721514 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 20 19:01:30.721528 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 20 19:01:30.721545 kernel: SELinux: policy capability userspace_initial_context=0 Apr 20 19:01:30.725426 kernel: audit: type=1403 audit(1776711673.955:87): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 20 19:01:30.727066 systemd[1]: Successfully loaded SELinux policy in 537.373ms. Apr 20 19:01:30.727089 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 30.634ms. Apr 20 19:01:30.727112 systemd[1]: systemd 258.3 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 20 19:01:30.727132 systemd[1]: Detected virtualization kvm. Apr 20 19:01:30.727146 systemd[1]: Detected architecture x86-64. Apr 20 19:01:30.727160 systemd[1]: Detected first boot. Apr 20 19:01:30.727179 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Apr 20 19:01:30.727193 kernel: audit: type=1334 audit(1776711675.295:88): prog-id=9 op=LOAD Apr 20 19:01:30.727207 kernel: audit: type=1334 audit(1776711675.296:89): prog-id=9 op=UNLOAD Apr 20 19:01:30.727220 zram_generator::config[1268]: No configuration found. Apr 20 19:01:30.727242 kernel: Guest personality initialized and is inactive Apr 20 19:01:30.727257 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Apr 20 19:01:30.727270 kernel: Initialized host personality Apr 20 19:01:30.729538 kernel: NET: Registered PF_VSOCK protocol family Apr 20 19:01:30.729673 systemd-ssh-generator[1264]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 19:01:30.729703 (sd-exec-[1249]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 19:01:30.729722 systemd[1]: Applying preset policy. Apr 20 19:01:30.729742 systemd[1]: Created symlink '/etc/systemd/system/multi-user.target.wants/prepare-helm.service' → '/etc/systemd/system/prepare-helm.service'. Apr 20 19:01:30.729758 systemd[1]: Created symlink '/etc/systemd/system/timers.target.wants/google-oslogin-cache.timer' → '/usr/lib/systemd/system/google-oslogin-cache.timer'. Apr 20 19:01:30.731635 systemd[1]: Populated /etc with preset unit settings. 
Apr 20 19:01:30.736544 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 19:01:30.736562 kernel: audit: type=1334 audit(1776711685.912:90): prog-id=10 op=LOAD Apr 20 19:01:30.736585 kernel: audit: type=1334 audit(1776711685.920:91): prog-id=2 op=UNLOAD Apr 20 19:01:30.736597 kernel: audit: type=1334 audit(1776711685.934:92): prog-id=11 op=LOAD Apr 20 19:01:30.736610 kernel: audit: type=1334 audit(1776711685.936:93): prog-id=12 op=LOAD Apr 20 19:01:30.736623 kernel: audit: type=1334 audit(1776711685.936:94): prog-id=3 op=UNLOAD Apr 20 19:01:30.736636 kernel: audit: type=1334 audit(1776711685.936:95): prog-id=4 op=UNLOAD Apr 20 19:01:30.736649 kernel: audit: type=1334 audit(1776711686.024:96): prog-id=13 op=LOAD Apr 20 19:01:30.736660 kernel: audit: type=1334 audit(1776711686.036:97): prog-id=10 op=UNLOAD Apr 20 19:01:30.736675 kernel: audit: type=1334 audit(1776711686.038:98): prog-id=14 op=LOAD Apr 20 19:01:30.736688 kernel: audit: type=1334 audit(1776711686.038:99): prog-id=15 op=LOAD Apr 20 19:01:30.736703 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 20 19:01:30.739593 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 20 19:01:30.739622 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 20 19:01:30.739640 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 20 19:01:30.739664 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 20 19:01:30.739680 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 20 19:01:30.739693 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 20 19:01:30.739706 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 20 19:01:30.739719 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Apr 20 19:01:30.739732 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 20 19:01:30.744139 systemd[1]: Created slice user.slice - User and Session Slice. Apr 20 19:01:30.744272 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 20 19:01:30.744292 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 20 19:01:30.744307 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 20 19:01:30.744326 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 20 19:01:30.744343 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 20 19:01:30.744360 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 20 19:01:30.744374 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 20 19:01:30.744392 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 20 19:01:30.744409 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 20 19:01:30.744423 systemd[1]: Reached target imports.target - Image Downloads. Apr 20 19:01:30.744439 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 20 19:01:30.744457 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 20 19:01:30.744472 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 20 19:01:30.744640 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 20 19:01:30.747327 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 20 19:01:30.747404 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Apr 20 19:01:30.747423 systemd[1]: Reached target remote-integritysetup.target - Remote Integrity Protected Volumes. Apr 20 19:01:30.747439 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Apr 20 19:01:30.747454 systemd[1]: Reached target slices.target - Slice Units. Apr 20 19:01:30.747469 systemd[1]: Reached target swap.target - Swaps. Apr 20 19:01:30.749570 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 20 19:01:30.749602 systemd[1]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password. Apr 20 19:01:30.750522 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 20 19:01:30.750710 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 20 19:01:30.750723 systemd[1]: Listening on systemd-factory-reset.socket - Factory Reset Management. Apr 20 19:01:30.750736 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Apr 20 19:01:30.750750 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Apr 20 19:01:30.750763 systemd[1]: Listening on systemd-networkd-varlink.socket - Network Service Varlink Socket. Apr 20 19:01:30.750915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 20 19:01:30.750930 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Apr 20 19:01:30.750942 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Apr 20 19:01:30.750955 systemd[1]: Listening on systemd-resolved-monitor.socket - Resolve Monitor Varlink Socket. Apr 20 19:01:30.750968 systemd[1]: Listening on systemd-resolved-varlink.socket - Resolve Service Varlink Socket. Apr 20 19:01:30.750981 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 20 19:01:30.750994 systemd[1]: Listening on systemd-udevd-varlink.socket - udev Varlink Socket. 
Apr 20 19:01:30.751010 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 20 19:01:30.751024 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 20 19:01:30.751036 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 20 19:01:30.751052 systemd[1]: Mounting media.mount - External Media Directory... Apr 20 19:01:30.751065 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 20 19:01:30.751079 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 20 19:01:30.751092 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 20 19:01:30.751107 systemd[1]: tmp.mount: x-systemd.graceful-option=usrquota specified, but option is not available, suppressing. Apr 20 19:01:30.751119 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 20 19:01:30.751133 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 20 19:01:30.751145 systemd[1]: Reached target machines.target - Virtual Machines and Containers. Apr 20 19:01:30.751159 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 20 19:01:30.751171 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 20 19:01:30.751186 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 20 19:01:30.751200 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 20 19:01:30.751212 systemd[1]: modprobe@dm_mod.service - Load Kernel Module dm_mod was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!dm_mod). Apr 20 19:01:30.751224 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 20 19:01:30.751240 systemd[1]: modprobe@efi_pstore.service - Load Kernel Module efi_pstore was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!efi_pstore). Apr 20 19:01:30.751252 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 20 19:01:30.751264 systemd[1]: modprobe@loop.service - Load Kernel Module loop was skipped because of an unmet condition check (ConditionKernelModuleLoaded=!loop). Apr 20 19:01:30.751277 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 20 19:01:30.751289 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 20 19:01:30.751305 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 20 19:01:30.751317 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 20 19:01:30.751331 systemd[1]: Stopped systemd-fsck-usr.service. Apr 20 19:01:30.751347 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 20 19:01:30.751361 kernel: ACPI: bus type drm_connector registered Apr 20 19:01:30.751375 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 20 19:01:30.751393 kernel: fuse: init (API version 7.41) Apr 20 19:01:30.751405 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 20 19:01:30.751420 systemd[1]: Starting systemd-network-generator.service - Generate Network Units from Kernel Command Line... Apr 20 19:01:30.751434 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 20 19:01:30.751451 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 20 19:01:30.751507 systemd-journald[1338]: Collecting audit messages is enabled. Apr 20 19:01:30.751538 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 20 19:01:30.751553 systemd-journald[1338]: Journal started Apr 20 19:01:30.751581 systemd-journald[1338]: Runtime Journal (/run/log/journal/7d59c4521b6a4ae798963236ead50d67) is 5.9M, max 47.8M, 41.8M free. Apr 20 19:01:28.741000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Apr 20 19:01:30.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:30.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:30.460000 audit: BPF prog-id=18 op=UNLOAD Apr 20 19:01:30.461000 audit: BPF prog-id=17 op=UNLOAD Apr 20 19:01:30.466000 audit: BPF prog-id=19 op=LOAD Apr 20 19:01:30.471000 audit: BPF prog-id=20 op=LOAD Apr 20 19:01:30.472000 audit: BPF prog-id=21 op=LOAD Apr 20 19:01:30.708000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Apr 20 19:01:30.708000 audit[1338]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdc778c430 a2=4000 a3=0 items=0 ppid=1 pid=1338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:01:30.708000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Apr 20 19:01:25.688677 systemd[1]: Queued start job for default target multi-user.target. Apr 20 19:01:26.105956 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 20 19:01:26.123417 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 20 19:01:26.154720 systemd[1]: systemd-journald.service: Consumed 4.638s CPU time. Apr 20 19:01:30.817992 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 20 19:01:30.851221 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 20 19:01:30.894596 systemd[1]: Started systemd-journald.service - Journal Service. Apr 20 19:01:30.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:30.902415 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Apr 20 19:01:30.917483 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 20 19:01:30.935018 systemd[1]: Mounted media.mount - External Media Directory. Apr 20 19:01:30.946475 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 20 19:01:30.960413 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 20 19:01:30.974282 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 20 19:01:31.009722 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 20 19:01:31.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.026458 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 20 19:01:31.055228 kernel: kauditd_printk_skb: 24 callbacks suppressed Apr 20 19:01:31.055390 kernel: audit: type=1130 audit(1776711691.025:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.088398 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 20 19:01:31.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.089216 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 20 19:01:31.104551 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 20 19:01:31.106677 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Apr 20 19:01:31.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.123930 kernel: audit: type=1130 audit(1776711691.086:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.123980 kernel: audit: type=1130 audit(1776711691.103:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.189512 kernel: audit: type=1131 audit(1776711691.103:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.195427 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 20 19:01:31.202712 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 20 19:01:31.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:31.289212 kernel: audit: type=1130 audit(1776711691.192:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.290180 kernel: audit: type=1131 audit(1776711691.192:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.305343 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 20 19:01:31.320636 kernel: audit: type=1130 audit(1776711691.298:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.321721 kernel: audit: type=1131 audit(1776711691.298:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:31.407398 kernel: audit: type=1130 audit(1776711691.365:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.409582 systemd[1]: Finished systemd-network-generator.service - Generate Network Units from Kernel Command Line. Apr 20 19:01:31.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.440759 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 20 19:01:31.490966 kernel: audit: type=1130 audit(1776711691.422:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.513617 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 20 19:01:31.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:31.550096 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 20 19:01:31.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:01:31.717761 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 20 19:01:31.744965 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Apr 20 19:01:31.822293 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 20 19:01:31.860260 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 20 19:01:31.873709 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 20 19:01:31.874473 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 20 19:01:31.892132 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 20 19:01:31.920348 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 20 19:01:31.950522 systemd[1]: Starting systemd-confext.service - Merge System Configuration Images into /etc/... Apr 20 19:01:32.007430 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 20 19:01:32.035665 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 20 19:01:32.053286 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 20 19:01:32.079426 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 20 19:01:32.101379 systemd-journald[1338]: Time spent on flushing to /var/log/journal/7d59c4521b6a4ae798963236ead50d67 is 54.996ms for 1307 entries. Apr 20 19:01:32.101379 systemd-journald[1338]: System Journal (/var/log/journal/7d59c4521b6a4ae798963236ead50d67) is 8M, max 163.5M, 155.5M free. Apr 20 19:01:32.107578 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Apr 20 19:01:32.289243 systemd-journald[1338]: Received client request to flush runtime journal. Apr 20 19:01:32.215741 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 20 19:01:32.241701 systemd[1]: Starting systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials... Apr 20 19:01:32.265761 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 20 19:01:32.294470 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 20 19:01:32.316368 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 20 19:01:32.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:32.333145 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 20 19:01:32.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:32.346309 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 20 19:01:32.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:32.371531 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Apr 20 19:01:32.371947 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Apr 20 19:01:32.381577 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 20 19:01:32.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:32.418586 systemd[1]: Finished systemd-userdb-load-credentials.service - Load JSON user/group Records from Credentials. Apr 20 19:01:32.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdb-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:32.441300 kernel: loop4: detected capacity change from 0 to 43472 Apr 20 19:01:32.448019 kernel: loop4: p1 p2 p3 Apr 20 19:01:32.462331 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 20 19:01:32.517004 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 20 19:01:32.544699 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 20 19:01:32.633360 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:01:32.633488 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc Apr 20 19:01:32.634263 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL) Apr 20 19:01:32.644313 kernel: device-mapper: ioctl: error adding target to table Apr 20 19:01:32.650097 systemd-confext[1391]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument Apr 20 19:01:32.717421 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Apr 20 19:01:32.736345 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 20 19:01:32.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:32.830245 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 20 19:01:32.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:32.858000 audit: BPF prog-id=22 op=LOAD Apr 20 19:01:32.859000 audit: BPF prog-id=23 op=LOAD Apr 20 19:01:32.860000 audit: BPF prog-id=24 op=LOAD Apr 20 19:01:32.870301 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Apr 20 19:01:32.899000 audit: BPF prog-id=25 op=LOAD Apr 20 19:01:32.909198 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 20 19:01:32.931000 audit: BPF prog-id=26 op=LOAD Apr 20 19:01:32.946701 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 20 19:01:33.013474 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 20 19:01:33.041343 systemd[1]: Starting modprobe@tun.service - Load Kernel Module tun... Apr 20 19:01:33.053000 audit: BPF prog-id=27 op=LOAD Apr 20 19:01:33.056000 audit: BPF prog-id=28 op=LOAD Apr 20 19:01:33.056000 audit: BPF prog-id=29 op=LOAD Apr 20 19:01:33.061339 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 20 19:01:33.128497 kernel: tun: Universal TUN/TAP device driver, 1.6 Apr 20 19:01:33.140947 systemd[1]: modprobe@tun.service: Deactivated successfully. Apr 20 19:01:33.141401 systemd[1]: Finished modprobe@tun.service - Load Kernel Module tun. 
Apr 20 19:01:33.159675 systemd-tmpfiles[1412]: ACLs are not supported, ignoring. Apr 20 19:01:33.177314 systemd-tmpfiles[1412]: ACLs are not supported, ignoring. Apr 20 19:01:33.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:33.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@tun comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:33.216000 audit: BPF prog-id=30 op=LOAD Apr 20 19:01:33.216000 audit: BPF prog-id=31 op=LOAD Apr 20 19:01:33.216000 audit: BPF prog-id=32 op=LOAD Apr 20 19:01:33.220604 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Apr 20 19:01:33.233572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 20 19:01:33.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:33.266613 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 20 19:01:33.299637 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 20 19:01:33.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:33.412244 systemd-nsresourced[1417]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Apr 20 19:01:33.420141 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. 
Apr 20 19:01:33.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:33.643255 systemd-oomd[1409]: No swap; memory pressure usage will be degraded Apr 20 19:01:33.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:33.647084 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Apr 20 19:01:33.742584 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 20 19:01:33.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:33.763628 systemd[1]: Reached target time-set.target - System Time Set. Apr 20 19:01:33.830557 systemd-resolved[1410]: Positive Trust Anchors: Apr 20 19:01:33.830649 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 20 19:01:33.830653 systemd-resolved[1410]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Apr 20 19:01:33.830684 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 20 19:01:33.843495 systemd-resolved[1410]: Defaulting to hostname 'linux'. Apr 20 19:01:33.888743 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 20 19:01:33.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:33.925507 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 20 19:01:35.028579 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 20 19:01:35.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:35.122000 audit: BPF prog-id=7 op=UNLOAD Apr 20 19:01:35.122000 audit: BPF prog-id=6 op=UNLOAD Apr 20 19:01:35.122000 audit: BPF prog-id=33 op=LOAD Apr 20 19:01:35.125000 audit: BPF prog-id=34 op=LOAD Apr 20 19:01:35.130207 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 20 19:01:36.140150 systemd-udevd[1439]: Using default interface naming scheme 'v258'. 
Apr 20 19:01:38.624519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 20 19:01:38.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:38.692479 kernel: kauditd_printk_skb: 34 callbacks suppressed Apr 20 19:01:38.692700 kernel: audit: type=1130 audit(1776711698.650:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:38.681000 audit: BPF prog-id=35 op=LOAD Apr 20 19:01:38.695588 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 20 19:01:38.704503 kernel: audit: type=1334 audit(1776711698.681:167): prog-id=35 op=LOAD Apr 20 19:01:39.225454 systemd-networkd[1441]: lo: Link UP Apr 20 19:01:39.226128 systemd-networkd[1441]: lo: Gained carrier Apr 20 19:01:39.229072 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 20 19:01:39.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:39.244581 systemd[1]: Reached target network.target - Network. Apr 20 19:01:39.295187 kernel: audit: type=1130 audit(1776711699.241:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:39.331040 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 20 19:01:39.349367 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Apr 20 19:01:39.493765 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 20 19:01:39.513134 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 20 19:01:39.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:39.554421 kernel: audit: type=1130 audit(1776711699.525:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-persistent-storage comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:39.935465 systemd-networkd[1441]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 19:01:39.935474 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 20 19:01:39.949185 systemd-networkd[1441]: eth0: Link UP Apr 20 19:01:39.949341 systemd-networkd[1441]: eth0: Gained carrier Apr 20 19:01:39.949367 systemd-networkd[1441]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Apr 20 19:01:40.035044 kernel: mousedev: PS/2 mouse device common for all mice Apr 20 19:01:40.048381 systemd-networkd[1441]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 20 19:01:40.053668 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Apr 20 19:01:42.228235 systemd-resolved[1410]: Clock change detected. Flushing caches. Apr 20 19:01:42.232228 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Apr 20 19:01:42.233874 systemd-timesyncd[1411]: Initial clock synchronization to Mon 2026-04-20 19:01:42.226401 UTC. Apr 20 19:01:42.271016 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Apr 20 19:01:42.335179 kernel: ACPI: button: Power Button [PWRF] Apr 20 19:01:42.455347 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 20 19:01:42.463854 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 20 19:01:42.481214 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 20 19:01:42.793309 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 20 19:01:42.856404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 20 19:01:43.475233 systemd-networkd[1441]: eth0: Gained IPv6LL Apr 20 19:01:43.527223 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 20 19:01:43.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:43.568296 kernel: audit: type=1130 audit(1776711703.540:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:01:43.572987 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 20 19:01:43.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Apr 20 19:01:43.628284 kernel: audit: type=1130 audit(1776711703.606:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:01:43.629062 systemd[1]: Reached target network-online.target - Network is Online.
Apr 20 19:01:43.824248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 19:01:44.042738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 20 19:01:44.047914 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:01:44.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:01:44.099351 kernel: audit: type=1130 audit(1776711704.058:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:01:44.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:01:44.141982 kernel: audit: type=1131 audit(1776711704.059:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:01:44.142309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 20 19:01:45.230837 kernel: erofs: (device dm-4): mounted with root inode @ nid 40.
Apr 20 19:01:45.873340 kernel: loop4: detected capacity change from 0 to 43472
Apr 20 19:01:45.901445 kernel: loop4: p1 p2 p3
Apr 20 19:01:45.981494 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 20 19:01:46.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:01:46.035868 kernel: audit: type=1130 audit(1776711706.007:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:01:46.370083 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 19:01:46.370409 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 19:01:46.370468 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 19:01:46.381050 kernel: device-mapper: ioctl: error adding target to table
Apr 20 19:01:46.387087 (sd-merge)[1508]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 19:01:46.429979 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 19:01:48.276514 kernel: erofs: (device dm-4): mounted with root inode @ nid 40.
Apr 20 19:01:48.325022 (sd-merge)[1508]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 20 19:01:48.378490 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 20 19:01:48.379012 systemd[1]: Finished systemd-confext.service - Merge System Configuration Images into /etc/.
Apr 20 19:01:48.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:01:48.425161 kernel: audit: type=1130 audit(1776711708.404:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-confext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:01:48.455169 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 20 19:01:49.243423 kernel: loop4: detected capacity change from 0 to 228704
Apr 20 19:01:50.483293 kernel: loop4: detected capacity change from 0 to 178200
Apr 20 19:01:50.512490 kernel: loop4: p1 p2 p3
Apr 20 19:01:51.561151 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 19:01:51.561352 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 19:01:51.591122 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 19:01:51.591344 kernel: device-mapper: ioctl: error adding target to table
Apr 20 19:01:51.592698 systemd-sysext[1518]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 19:01:51.667906 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 19:01:53.494352 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 20 19:01:54.477809 kernel: loop4: detected capacity change from 0 to 378016
Apr 20 19:01:54.501312 kernel: loop4: p1 p2 p3
Apr 20 19:01:54.621373 kernel: loop4: p1 p2 p3
Apr 20 19:01:55.539013 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 19:01:55.540506 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 19:01:55.542152 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 19:01:55.555526 kernel: device-mapper: ioctl: error adding target to table
Apr 20 19:01:55.556337 systemd-sysext[1518]: device-mapper: reload ioctl on loop4p1-verity (253:4) failed: Invalid argument
Apr 20 19:01:55.582911 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 19:01:56.582359 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 20 19:01:57.300446 kernel: loop4: detected capacity change from 0 to 228704
Apr 20 19:01:57.865308 kernel: loop5: detected capacity change from 0 to 178200
Apr 20 19:01:57.883452 kernel: loop5: p1 p2 p3
Apr 20 19:01:58.228280 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 19:01:58.234277 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 19:01:58.235162 kernel: device-mapper: table: 253:4: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 19:01:58.238403 kernel: device-mapper: ioctl: error adding target to table
Apr 20 19:01:58.243512 (sd-merge)[1538]: device-mapper: reload ioctl on loop5p1-verity (253:4) failed: Invalid argument
Apr 20 19:01:58.270419 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 19:01:58.996236 kernel: erofs: (device dm-4): mounted with root inode @ nid 39.
Apr 20 19:01:59.188216 kernel: loop6: detected capacity change from 0 to 378016
Apr 20 19:01:59.216019 kernel: loop6: p1 p2 p3
Apr 20 19:01:59.872327 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 19:01:59.876860 kernel: device-mapper: verity: Unrecognized verity feature request: root_hash_sig_key_desc
Apr 20 19:01:59.882379 kernel: device-mapper: table: 253:5: verity: Unrecognized verity feature request (-EINVAL)
Apr 20 19:01:59.884733 kernel: device-mapper: ioctl: error adding target to table
Apr 20 19:01:59.890115 (sd-merge)[1538]: device-mapper: reload ioctl on loop6p1-verity (253:5) failed: Invalid argument
Apr 20 19:01:59.964875 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 20 19:02:01.108466 kernel: erofs: (device dm-5): mounted with root inode @ nid 39.
Apr 20 19:02:01.313274 (sd-merge)[1538]: Skipping extension refresh because no change was found, use --always-refresh=yes to always do a refresh.
Apr 20 19:02:01.454862 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 20 19:02:01.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:01.498914 kernel: audit: type=1130 audit(1776711721.467:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:01.503465 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 20 19:02:01.503496 kernel: device-mapper: ioctl: remove_all left 4 open device(s)
Apr 20 19:02:01.527085 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 20 19:02:02.296333 systemd-tmpfiles[1555]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 20 19:02:02.296419 systemd-tmpfiles[1555]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 20 19:02:02.296968 systemd-tmpfiles[1555]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 20 19:02:02.331493 systemd-tmpfiles[1555]: ACLs are not supported, ignoring.
Apr 20 19:02:02.426623 systemd-tmpfiles[1555]: ACLs are not supported, ignoring.
Apr 20 19:02:03.111396 systemd-tmpfiles[1555]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 19:02:03.113036 systemd-tmpfiles[1555]: Skipping /boot
Apr 20 19:02:04.349014 systemd-tmpfiles[1555]: Detected autofs mount point /boot during canonicalization of boot.
Apr 20 19:02:04.351139 systemd-tmpfiles[1555]: Skipping /boot
Apr 20 19:02:05.310833 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 20 19:02:05.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:05.419794 kernel: audit: type=1130 audit(1776711725.362:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:05.688064 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 20 19:02:05.744209 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 20 19:02:05.813260 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 20 19:02:05.868399 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 20 19:02:05.996034 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 20 19:02:06.117000 audit[1571]: AUDIT1127 pid=1571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:06.151655 kernel: audit: type=1127 audit(1776711726.117:178): pid=1571 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:06.217805 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 20 19:02:06.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:06.233890 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 20 19:02:06.259255 kernel: audit: type=1130 audit(1776711726.232:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:06.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:06.323940 kernel: audit: type=1130 audit(1776711726.275:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:06.649400 augenrules[1588]: No rules
Apr 20 19:02:06.646000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 20 19:02:06.658858 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 20 19:02:06.659244 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 20 19:02:06.646000 audit[1588]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe0d1fac80 a2=420 a3=0 items=0 ppid=1561 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:02:06.701079 kernel: audit: type=1305 audit(1776711726.646:181): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Apr 20 19:02:06.705227 kernel: audit: type=1300 audit(1776711726.646:181): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe0d1fac80 a2=420 a3=0 items=0 ppid=1561 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:02:06.646000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 20 19:02:06.720232 kernel: audit: type=1327 audit(1776711726.646:181): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 20 19:02:07.128935 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 20 19:02:07.176903 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 20 19:02:27.619172 ldconfig[1563]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 20 19:02:28.108406 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 20 19:02:28.241985 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 20 19:02:29.620318 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 20 19:02:29.646872 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 20 19:02:29.754101 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 20 19:02:29.817499 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 20 19:02:29.845502 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 20 19:02:29.901781 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 20 19:02:29.936901 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 20 19:02:30.045514 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Apr 20 19:02:30.119520 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Apr 20 19:02:30.237294 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 20 19:02:30.301255 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 20 19:02:30.303081 systemd[1]: Reached target paths.target - Path Units.
Apr 20 19:02:30.323917 systemd[1]: Reached target timers.target - Timer Units.
Apr 20 19:02:30.527945 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 20 19:02:30.746297 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 20 19:02:30.798912 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 20 19:02:31.072891 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 20 19:02:31.107085 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 20 19:02:31.264433 systemd[1]: Listening on systemd-logind-varlink.socket - User Login Management Varlink Socket.
Apr 20 19:02:31.330503 systemd[1]: Listening on systemd-machined.socket - Virtual Machine and Container Registration Service Socket.
Apr 20 19:02:31.562483 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 20 19:02:31.676377 systemd[1]: Reached target sockets.target - Socket Units.
Apr 20 19:02:31.748670 systemd[1]: Reached target basic.target - Basic System.
Apr 20 19:02:31.798263 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 20 19:02:31.799987 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 20 19:02:31.962707 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 20 19:02:32.030316 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 20 19:02:32.081406 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 20 19:02:32.183828 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 20 19:02:32.217085 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 20 19:02:32.268256 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 20 19:02:32.338374 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 20 19:02:32.354662 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 20 19:02:32.373474 jq[1604]: false
Apr 20 19:02:32.422429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:02:32.484324 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 20 19:02:32.517779 extend-filesystems[1605]: Found /dev/vda6
Apr 20 19:02:32.534340 google_oslogin_nss_cache[1606]: oslogin_cache_refresh[1606]: Refreshing passwd entry cache
Apr 20 19:02:32.534369 oslogin_cache_refresh[1606]: Refreshing passwd entry cache
Apr 20 19:02:32.538170 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 20 19:02:32.624262 extend-filesystems[1605]: Found /dev/vda9
Apr 20 19:02:32.626284 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 20 19:02:32.652431 google_oslogin_nss_cache[1606]: oslogin_cache_refresh[1606]: Failure getting users, quitting
Apr 20 19:02:32.652431 google_oslogin_nss_cache[1606]: oslogin_cache_refresh[1606]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 20 19:02:32.638235 oslogin_cache_refresh[1606]: Failure getting users, quitting
Apr 20 19:02:32.652147 oslogin_cache_refresh[1606]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 20 19:02:32.653771 oslogin_cache_refresh[1606]: Refreshing group entry cache
Apr 20 19:02:32.653837 google_oslogin_nss_cache[1606]: oslogin_cache_refresh[1606]: Refreshing group entry cache
Apr 20 19:02:32.656440 extend-filesystems[1605]: Checking size of /dev/vda9
Apr 20 19:02:32.678207 google_oslogin_nss_cache[1606]: oslogin_cache_refresh[1606]: Failure getting groups, quitting
Apr 20 19:02:32.680668 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 20 19:02:32.695143 oslogin_cache_refresh[1606]: Failure getting groups, quitting
Apr 20 19:02:32.697010 google_oslogin_nss_cache[1606]: oslogin_cache_refresh[1606]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 20 19:02:32.695290 oslogin_cache_refresh[1606]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 20 19:02:32.706293 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 20 19:02:32.756190 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 20 19:02:32.762374 extend-filesystems[1605]: Resized partition /dev/vda9
Apr 20 19:02:32.833165 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 20 19:02:32.839004 extend-filesystems[1632]: resize2fs 1.47.3 (8-Jul-2025)
Apr 20 19:02:32.863692 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Apr 20 19:02:32.882122 systemd[1]: Starting update-engine.service - Update Engine...
Apr 20 19:02:32.916077 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 20 19:02:33.028248 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 20 19:02:33.355372 update_engine[1636]: I20260420 19:02:33.221313 1636 main.cc:92] Flatcar Update Engine starting
Apr 20 19:02:33.355971 jq[1640]: true
Apr 20 19:02:33.053149 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 20 19:02:33.055365 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 20 19:02:33.068446 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 20 19:02:33.071142 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 20 19:02:33.134308 systemd[1]: motdgen.service: Deactivated successfully.
Apr 20 19:02:33.139525 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 20 19:02:33.485154 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Apr 20 19:02:33.181193 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 20 19:02:33.530793 extend-filesystems[1632]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 20 19:02:33.530793 extend-filesystems[1632]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 20 19:02:33.530793 extend-filesystems[1632]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Apr 20 19:02:33.230805 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 20 19:02:33.668347 extend-filesystems[1605]: Resized filesystem in /dev/vda9
Apr 20 19:02:33.730300 jq[1658]: true
Apr 20 19:02:33.239764 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 20 19:02:33.532457 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 20 19:02:33.563006 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 20 19:02:33.616766 systemd-logind[1627]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 20 19:02:33.616888 systemd-logind[1627]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 20 19:02:33.628024 systemd-logind[1627]: New seat seat0.
Apr 20 19:02:33.667404 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 20 19:02:33.669362 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 20 19:02:33.681458 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 20 19:02:33.770788 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 20 19:02:33.789446 tar[1657]: linux-amd64/LICENSE
Apr 20 19:02:33.795966 tar[1657]: linux-amd64/helm
Apr 20 19:02:34.353277 dbus-daemon[1602]: [system] SELinux support is enabled
Apr 20 19:02:34.359314 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 20 19:02:34.381222 bash[1706]: Updated "/home/core/.ssh/authorized_keys"
Apr 20 19:02:34.413134 update_engine[1636]: I20260420 19:02:34.412834 1636 update_check_scheduler.cc:74] Next update check in 5m8s
Apr 20 19:02:34.521331 dbus-daemon[1602]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 20 19:02:34.524003 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 20 19:02:34.747217 systemd[1]: Started update-engine.service - Update Engine.
Apr 20 19:02:34.842457 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 20 19:02:34.847408 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 20 19:02:34.853402 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 20 19:02:34.877064 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 20 19:02:34.877382 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 20 19:02:34.948021 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 20 19:02:35.092397 sshd_keygen[1649]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 20 19:02:35.433353 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 20 19:02:35.455121 locksmithd[1713]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 20 19:02:35.499832 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 20 19:02:35.666986 systemd[1]: issuegen.service: Deactivated successfully.
Apr 20 19:02:35.671310 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 20 19:02:35.725090 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 20 19:02:35.911017 tar[1657]: linux-amd64/README.md
Apr 20 19:02:35.934201 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 20 19:02:36.046340 containerd[1659]: time="2026-04-20T19:02:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 20 19:02:36.051999 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 20 19:02:36.072875 containerd[1659]: time="2026-04-20T19:02:36.071488221Z" level=info msg="starting containerd" revision=dea7da592f5d1d2b7755e3a161be07f43fad8f75 version=v2.2.1
Apr 20 19:02:36.105119 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 20 19:02:36.132252 systemd[1]: Reached target getty.target - Login Prompts.
Apr 20 19:02:36.159294 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 20 19:02:36.244942 containerd[1659]: time="2026-04-20T19:02:36.242910598Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="32.243µs"
Apr 20 19:02:36.244942 containerd[1659]: time="2026-04-20T19:02:36.243142625Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 20 19:02:36.244942 containerd[1659]: time="2026-04-20T19:02:36.243362555Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 20 19:02:36.244942 containerd[1659]: time="2026-04-20T19:02:36.243382644Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 20 19:02:36.259198 containerd[1659]: time="2026-04-20T19:02:36.258970537Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 20 19:02:36.259198 containerd[1659]: time="2026-04-20T19:02:36.259091004Z" level=info msg="loading plugin" id=io.containerd.mount-handler.v1.erofs type=io.containerd.mount-handler.v1
Apr 20 19:02:36.259198 containerd[1659]: time="2026-04-20T19:02:36.259109256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 20 19:02:36.259479 containerd[1659]: time="2026-04-20T19:02:36.259237293Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 20 19:02:36.259479 containerd[1659]: time="2026-04-20T19:02:36.259252401Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 20 19:02:36.260332 containerd[1659]: time="2026-04-20T19:02:36.259519192Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 20 19:02:36.260332 containerd[1659]: time="2026-04-20T19:02:36.260177520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 20 19:02:36.260332 containerd[1659]: time="2026-04-20T19:02:36.260236081Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 20 19:02:36.260332 containerd[1659]: time="2026-04-20T19:02:36.260243385Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Apr 20 19:02:36.262668 containerd[1659]: time="2026-04-20T19:02:36.262218383Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 20 19:02:36.262668 containerd[1659]: time="2026-04-20T19:02:36.262439353Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 20 19:02:36.263335 containerd[1659]: time="2026-04-20T19:02:36.263306622Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 20 19:02:36.263471 containerd[1659]: time="2026-04-20T19:02:36.263459238Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 20 19:02:36.263504 containerd[1659]: time="2026-04-20T19:02:36.263498514Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 20 19:02:36.269422 containerd[1659]: time="2026-04-20T19:02:36.269358276Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 20 19:02:36.321391 containerd[1659]: time="2026-04-20T19:02:36.317392071Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 20 19:02:36.321843 containerd[1659]: time="2026-04-20T19:02:36.321391014Z" level=info msg="metadata content store policy set" policy=shared
Apr 20 19:02:36.391455 containerd[1659]: time="2026-04-20T19:02:36.391337743Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 20 19:02:36.393971 containerd[1659]: time="2026-04-20T19:02:36.393279320Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 20 19:02:36.396242 containerd[1659]: time="2026-04-20T19:02:36.396020830Z" level=info msg="built-in NRI default validator is disabled"
Apr 20 19:02:36.396242 containerd[1659]: time="2026-04-20T19:02:36.396168954Z" level=info msg="runtime interface created"
Apr 20 19:02:36.396242 containerd[1659]: time="2026-04-20T19:02:36.396184108Z" level=info msg="created NRI interface"
Apr 20 19:02:36.396242 containerd[1659]: time="2026-04-20T19:02:36.396210125Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 20 19:02:36.396891 containerd[1659]: time="2026-04-20T19:02:36.396798817Z" level=info msg="skip loading plugin" error="failed to check mkfs.erofs availability: failed to run mkfs.erofs --help: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Apr 20 19:02:36.396891 containerd[1659]: time="2026-04-20T19:02:36.396889051Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 20 19:02:36.396974 containerd[1659]: time="2026-04-20T19:02:36.396909339Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 20 19:02:36.396974 containerd[1659]: time="2026-04-20T19:02:36.396925516Z" level=info msg="loading plugin" id=io.containerd.mount-manager.v1.bolt type=io.containerd.mount-manager.v1
Apr 20 19:02:36.397298 containerd[1659]: time="2026-04-20T19:02:36.397196974Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 20 19:02:36.397325 containerd[1659]: time="2026-04-20T19:02:36.397312551Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 20 19:02:36.397340 containerd[1659]: time="2026-04-20T19:02:36.397326808Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 20 19:02:36.397355 containerd[1659]: time="2026-04-20T19:02:36.397342398Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 20 19:02:36.397371 containerd[1659]: time="2026-04-20T19:02:36.397357280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 20 19:02:36.397384 containerd[1659]: time="2026-04-20T19:02:36.397370599Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 20 19:02:36.397397 containerd[1659]: time="2026-04-20T19:02:36.397382743Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 20 19:02:36.397419 containerd[1659]: time="2026-04-20T19:02:36.397393816Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 20 19:02:36.397419 containerd[1659]: time="2026-04-20T19:02:36.397407327Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 20 19:02:36.399781 containerd[1659]: time="2026-04-20T19:02:36.399055778Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 20 19:02:36.400054 containerd[1659]: time="2026-04-20T19:02:36.399907081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 20 19:02:36.400054 containerd[1659]: time="2026-04-20T19:02:36.399945118Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 20 19:02:36.400054 containerd[1659]: time="2026-04-20T19:02:36.399960788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 20 19:02:36.400054 containerd[1659]: time="2026-04-20T19:02:36.399972579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 20 19:02:36.400054 containerd[1659]: time="2026-04-20T19:02:36.399983413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 20 19:02:36.400054 containerd[1659]: time="2026-04-20T19:02:36.400036961Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 20 19:02:36.400054 containerd[1659]: time="2026-04-20T19:02:36.400049361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 20 19:02:36.400231 containerd[1659]: time="2026-04-20T19:02:36.400063390Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.mounts type=io.containerd.grpc.v1
Apr 20 19:02:36.400231 containerd[1659]: time="2026-04-20T19:02:36.400073643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 20 19:02:36.400231 containerd[1659]: time="2026-04-20T19:02:36.400090416Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 20 19:02:36.400231 containerd[1659]: time="2026-04-20T19:02:36.400101526Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 20 19:02:36.401504 containerd[1659]: time="2026-04-20T19:02:36.401364465Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 20 19:02:36.404415 containerd[1659]: time="2026-04-20T19:02:36.404308957Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 20 19:02:36.404415 containerd[1659]: time="2026-04-20T19:02:36.404409811Z" level=info msg="Start snapshots syncer"
Apr 20 19:02:36.405157 containerd[1659]: time="2026-04-20T19:02:36.405063951Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 20 19:02:36.405870 containerd[1659]: time="2026-04-20T19:02:36.405794857Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarn
ings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 20 19:02:36.406106 containerd[1659]: time="2026-04-20T19:02:36.405938237Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 20 19:02:36.407250 containerd[1659]: time="2026-04-20T19:02:36.406306445Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 20 19:02:36.408902 containerd[1659]: time="2026-04-20T19:02:36.408785357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 20 19:02:36.409102 containerd[1659]: time="2026-04-20T19:02:36.408957862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 20 19:02:36.409135 containerd[1659]: time="2026-04-20T19:02:36.409123483Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.410297377Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.411794306Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.411838245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.411856351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.411875766Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.411926428Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.411972517Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.411991497Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.412001593Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.412011389Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.412019284Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.412029377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.412041643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 20 19:02:36.412177 containerd[1659]: time="2026-04-20T19:02:36.412060593Z" level=info msg="Connect containerd service" Apr 20 19:02:36.413108 containerd[1659]: time="2026-04-20T19:02:36.412086518Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 20 19:02:36.423205 containerd[1659]: 
time="2026-04-20T19:02:36.422019390Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 20 19:02:38.236065 containerd[1659]: time="2026-04-20T19:02:38.233861591Z" level=info msg="Start subscribing containerd event" Apr 20 19:02:38.236065 containerd[1659]: time="2026-04-20T19:02:38.234415932Z" level=info msg="Start recovering state" Apr 20 19:02:38.244970 containerd[1659]: time="2026-04-20T19:02:38.244901945Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 20 19:02:38.246115 containerd[1659]: time="2026-04-20T19:02:38.245981546Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 20 19:02:38.254937 containerd[1659]: time="2026-04-20T19:02:38.254464639Z" level=info msg="Start event monitor" Apr 20 19:02:38.261169 containerd[1659]: time="2026-04-20T19:02:38.257134520Z" level=info msg="Start cni network conf syncer for default" Apr 20 19:02:38.263494 containerd[1659]: time="2026-04-20T19:02:38.262346928Z" level=info msg="Start streaming server" Apr 20 19:02:38.263494 containerd[1659]: time="2026-04-20T19:02:38.262475738Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 20 19:02:38.263494 containerd[1659]: time="2026-04-20T19:02:38.262489799Z" level=info msg="runtime interface starting up..." Apr 20 19:02:38.265912 containerd[1659]: time="2026-04-20T19:02:38.265410501Z" level=info msg="starting plugins..." Apr 20 19:02:38.278461 containerd[1659]: time="2026-04-20T19:02:38.278239802Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 20 19:02:38.290417 containerd[1659]: time="2026-04-20T19:02:38.282497152Z" level=info msg="containerd successfully booted in 2.240831s" Apr 20 19:02:38.295520 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 20 19:02:39.720823 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 20 19:02:39.784897 systemd[1]: Started sshd@0-1-10.0.0.14:22-10.0.0.1:35078.service - OpenSSH per-connection server daemon (10.0.0.1:35078).
Apr 20 19:02:41.441915 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 35078 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:02:41.534264 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:02:41.823348 systemd-logind[1627]: New session '1' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:02:41.833399 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 20 19:02:41.878777 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 20 19:02:42.229191 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 20 19:02:42.299775 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 20 19:02:42.468294 (systemd)[1768]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:02:42.511941 systemd-logind[1627]: New session '2' of user 'core' with class 'manager-early' and type 'unspecified'.
Apr 20 19:02:42.659795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:02:42.674221 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 20 19:02:42.696097 (kubelet)[1776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:02:45.903111 systemd[1768]: Queued start job for default target default.target.
Apr 20 19:02:45.932076 systemd[1768]: Created slice app.slice - User Application Slice.
Apr 20 19:02:45.932219 systemd[1768]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Apr 20 19:02:45.932238 systemd[1768]: Reached target machines.target - Virtual Machines and Containers.
Apr 20 19:02:45.932367 systemd[1768]: Reached target paths.target - Paths.
Apr 20 19:02:45.932472 systemd[1768]: Reached target timers.target - Timers.
Apr 20 19:02:45.993816 systemd[1768]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 20 19:02:46.000214 systemd[1768]: Listening on systemd-ask-password.socket - Query the User Interactively for a Password.
Apr 20 19:02:46.017043 systemd[1768]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Apr 20 19:02:46.147180 systemd[1768]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 20 19:02:46.148214 systemd[1768]: Reached target sockets.target - Sockets.
Apr 20 19:02:46.211185 systemd[1768]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Apr 20 19:02:46.212841 systemd[1768]: Reached target basic.target - Basic System.
Apr 20 19:02:46.213863 systemd[1768]: Reached target default.target - Main User Target.
Apr 20 19:02:46.214010 systemd[1768]: Startup finished in 3.581s.
Apr 20 19:02:46.214101 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 20 19:02:46.238856 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 20 19:02:46.245030 systemd[1]: Startup finished in 20.864s (kernel) + 1min 2.601s (initrd) + 1min 30.680s (userspace) = 2min 54.147s.
Apr 20 19:02:46.444709 systemd[1]: Started sshd@1-2-10.0.0.14:22-10.0.0.1:41820.service - OpenSSH per-connection server daemon (10.0.0.1:41820).
Apr 20 19:02:47.284508 sshd[1794]: Accepted publickey for core from 10.0.0.1 port 41820 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:02:47.331843 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:02:47.456795 systemd-logind[1627]: New session '3' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:02:47.547764 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 20 19:02:47.918277 sshd[1798]: Connection closed by 10.0.0.1 port 41820
Apr 20 19:02:47.920047 sshd-session[1794]: pam_unix(sshd:session): session closed for user core
Apr 20 19:02:48.038772 systemd[1]: sshd@1-2-10.0.0.14:22-10.0.0.1:41820.service: Deactivated successfully.
Apr 20 19:02:48.096841 systemd[1]: session-3.scope: Deactivated successfully.
Apr 20 19:02:48.122337 systemd-logind[1627]: Session 3 logged out. Waiting for processes to exit.
Apr 20 19:02:48.230232 systemd[1]: Started sshd@2-4097-10.0.0.14:22-10.0.0.1:41824.service - OpenSSH per-connection server daemon (10.0.0.1:41824).
Apr 20 19:02:48.236310 systemd-logind[1627]: Removed session 3.
Apr 20 19:02:48.649963 kubelet[1776]: E0420 19:02:48.643729 1776 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:02:48.678867 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:02:48.681255 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:02:48.688192 systemd[1]: kubelet.service: Consumed 4.771s CPU time, 271.4M memory peak.
Apr 20 19:02:49.062976 sshd[1806]: Accepted publickey for core from 10.0.0.1 port 41824 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:02:49.119115 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:02:49.529314 systemd-logind[1627]: New session '4' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:02:49.691246 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 20 19:02:50.203824 sshd[1811]: Connection closed by 10.0.0.1 port 41824
Apr 20 19:02:50.206723 sshd-session[1806]: pam_unix(sshd:session): session closed for user core
Apr 20 19:02:50.400158 systemd[1]: sshd@2-4097-10.0.0.14:22-10.0.0.1:41824.service: Deactivated successfully.
Apr 20 19:02:50.420220 systemd[1]: session-4.scope: Deactivated successfully.
Apr 20 19:02:50.533958 systemd-logind[1627]: Session 4 logged out. Waiting for processes to exit.
Apr 20 19:02:50.746251 systemd[1]: Started sshd@3-4098-10.0.0.14:22-10.0.0.1:41832.service - OpenSSH per-connection server daemon (10.0.0.1:41832).
Apr 20 19:02:50.761877 systemd-logind[1627]: Removed session 4.
Apr 20 19:02:52.035161 sshd[1817]: Accepted publickey for core from 10.0.0.1 port 41832 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:02:52.136238 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:02:52.420483 systemd-logind[1627]: New session '5' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:02:52.486761 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 20 19:02:53.120795 sshd[1821]: Connection closed by 10.0.0.1 port 41832
Apr 20 19:02:53.124885 sshd-session[1817]: pam_unix(sshd:session): session closed for user core
Apr 20 19:02:53.246841 systemd[1]: sshd@3-4098-10.0.0.14:22-10.0.0.1:41832.service: Deactivated successfully.
Apr 20 19:02:53.365611 systemd[1]: session-5.scope: Deactivated successfully.
Apr 20 19:02:53.412997 systemd-logind[1627]: Session 5 logged out. Waiting for processes to exit.
Apr 20 19:02:53.519251 systemd[1]: Started sshd@4-8193-10.0.0.14:22-10.0.0.1:41842.service - OpenSSH per-connection server daemon (10.0.0.1:41842).
Apr 20 19:02:53.528616 systemd-logind[1627]: Removed session 5.
Apr 20 19:02:54.077843 sshd[1827]: Accepted publickey for core from 10.0.0.1 port 41842 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:02:54.147370 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:02:54.450655 systemd-logind[1627]: New session '6' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:02:54.478661 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 20 19:02:55.410144 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 20 19:02:55.410472 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 20 19:02:55.549341 sudo[1832]: pam_unix(sudo:session): session closed for user root
Apr 20 19:02:55.618588 sshd[1831]: Connection closed by 10.0.0.1 port 41842
Apr 20 19:02:55.619131 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Apr 20 19:02:55.732294 systemd[1]: sshd@4-8193-10.0.0.14:22-10.0.0.1:41842.service: Deactivated successfully.
Apr 20 19:02:55.803924 systemd[1]: session-6.scope: Deactivated successfully.
Apr 20 19:02:55.818665 systemd-logind[1627]: Session 6 logged out. Waiting for processes to exit.
Apr 20 19:02:55.930182 systemd[1]: Started sshd@5-12289-10.0.0.14:22-10.0.0.1:41710.service - OpenSSH per-connection server daemon (10.0.0.1:41710).
Apr 20 19:02:55.940796 systemd-logind[1627]: Removed session 6.
Apr 20 19:02:57.827125 sshd[1839]: Accepted publickey for core from 10.0.0.1 port 41710 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:02:57.975320 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:02:58.325113 systemd-logind[1627]: New session '7' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:02:58.511303 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 20 19:02:58.916727 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 20 19:02:58.917424 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 20 19:02:58.920415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 20 19:02:58.965583 sudo[1845]: pam_unix(sudo:session): session closed for user root
Apr 20 19:02:58.979635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:02:59.041897 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 20 19:02:59.043256 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 20 19:02:59.162300 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 20 19:02:59.471000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Apr 20 19:02:59.478120 augenrules[1872]: No rules
Apr 20 19:02:59.471000 audit[1872]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffdb5acc80 a2=420 a3=0 items=0 ppid=1853 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:02:59.471000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 20 19:02:59.532891 kernel: audit: type=1305 audit(1776711779.471:182): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Apr 20 19:02:59.532928 kernel: audit: type=1300 audit(1776711779.471:182): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffdb5acc80 a2=420 a3=0 items=0 ppid=1853 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:02:59.532943 kernel: audit: type=1327 audit(1776711779.471:182): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Apr 20 19:02:59.533157 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 20 19:02:59.537085 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 20 19:02:59.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.552040 sudo[1844]: pam_unix(sudo:session): session closed for user root
Apr 20 19:02:59.561855 kernel: audit: type=1130 audit(1776711779.539:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.563448 sshd[1843]: Connection closed by 10.0.0.1 port 41710
Apr 20 19:02:59.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.569083 sshd-session[1839]: pam_unix(sshd:session): session closed for user core
Apr 20 19:02:59.581179 kernel: audit: type=1131 audit(1776711779.539:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.550000 audit[1844]: AUDIT1106 pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.550000 audit[1844]: AUDIT1104 pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.611826 kernel: audit: type=1106 audit(1776711779.550:185): pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.577000 audit[1839]: AUDIT1106 pid=1839 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:02:59.616269 kernel: audit: type=1104 audit(1776711779.550:186): pid=1844 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.617187 kernel: audit: type=1106 audit(1776711779.577:187): pid=1839 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:02:59.577000 audit[1839]: AUDIT1104 pid=1839 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:02:59.644765 kernel: audit: type=1104 audit(1776711779.577:188): pid=1839 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:02:59.676289 systemd[1]: sshd@5-12289-10.0.0.14:22-10.0.0.1:41710.service: Deactivated successfully.
Apr 20 19:02:59.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-12289-10.0.0.14:22-10.0.0.1:41710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.732471 kernel: audit: type=1131 audit(1776711779.679:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-12289-10.0.0.14:22-10.0.0.1:41710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.756431 systemd[1]: session-7.scope: Deactivated successfully.
Apr 20 19:02:59.773075 systemd-logind[1627]: Session 7 logged out. Waiting for processes to exit.
Apr 20 19:02:59.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-12290-10.0.0.14:22-10.0.0.1:41716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:02:59.833949 systemd[1]: Started sshd@6-12290-10.0.0.14:22-10.0.0.1:41716.service - OpenSSH per-connection server daemon (10.0.0.1:41716).
Apr 20 19:02:59.840374 systemd-logind[1627]: Removed session 7.
Apr 20 19:02:59.993446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:02:59.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:00.039154 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:03:00.339000 audit[1883]: AUDIT1101 pid=1883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:03:00.350002 sshd[1883]: Accepted publickey for core from 10.0.0.1 port 41716 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:03:00.354000 audit[1883]: AUDIT1103 pid=1883 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:03:00.355000 audit[1883]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc21751350 a2=3 a3=0 items=0 ppid=1 pid=1883 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:03:00.355000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:03:00.361968 sshd-session[1883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:03:00.408377 systemd-logind[1627]: New session '8' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:03:00.413779 kubelet[1888]: E0420 19:03:00.412981 1888 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:03:00.422483 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 20 19:03:00.447446 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:03:00.450182 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:03:00.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:03:00.451128 systemd[1]: kubelet.service: Consumed 798ms CPU time, 110.5M memory peak.
Apr 20 19:03:00.529000 audit[1883]: AUDIT1105 pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:03:00.605000 audit[1899]: AUDIT1103 pid=1899 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:03:01.129000 audit[1900]: AUDIT1101 pid=1900 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:01.131337 sudo[1900]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 20 19:03:01.129000 audit[1900]: AUDIT1110 pid=1900 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:01.131000 audit[1900]: AUDIT1105 pid=1900 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:01.133425 sudo[1900]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 20 19:03:10.738635 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 20 19:03:10.779167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:03:13.347046 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:03:13.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:13.354517 kernel: kauditd_printk_skb: 13 callbacks suppressed
Apr 20 19:03:13.355356 kernel: audit: type=1130 audit(1776711793.349:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:13.396391 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:03:13.749614 kubelet[1928]: E0420 19:03:13.749298 1928 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:03:13.809730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:03:13.810216 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:03:13.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:03:13.813707 systemd[1]: kubelet.service: Consumed 1.929s CPU time, 110.6M memory peak.
Apr 20 19:03:13.820638 kernel: audit: type=1131 audit(1776711793.811:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:03:16.350673 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 20 19:03:16.527100 (dockerd)[1937]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 20 19:03:20.136287 update_engine[1636]: I20260420 19:03:20.135043 1636 update_attempter.cc:509] Updating boot flags...
Apr 20 19:03:23.942007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 20 19:03:23.954368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:03:26.263726 dockerd[1937]: time="2026-04-20T19:03:26.263337126Z" level=info msg="Starting up"
Apr 20 19:03:26.415061 dockerd[1937]: time="2026-04-20T19:03:26.413926249Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 20 19:03:26.470014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:03:26.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:26.489455 kernel: audit: type=1130 audit(1776711806.477:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:26.527147 (kubelet)[1982]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:03:26.620132 dockerd[1937]: time="2026-04-20T19:03:26.619823792Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 20 19:03:27.020808 kubelet[1982]: E0420 19:03:27.020651 1982 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:03:27.026995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:03:27.027155 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:03:27.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:03:27.029133 systemd[1]: kubelet.service: Consumed 1.817s CPU time, 110.6M memory peak.
Apr 20 19:03:27.039493 kernel: audit: type=1131 audit(1776711807.028:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:03:27.809203 dockerd[1937]: time="2026-04-20T19:03:27.807742829Z" level=info msg="Loading containers: start."
Apr 20 19:03:28.136078 kernel: Initializing XFRM netlink socket Apr 20 19:03:30.757000 audit[2028]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:30.757000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe3f597580 a2=0 a3=0 items=0 ppid=1937 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:30.783397 kernel: audit: type=1325 audit(1776711810.757:205): table=nat:2 family=2 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:30.783718 kernel: audit: type=1300 audit(1776711810.757:205): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe3f597580 a2=0 a3=0 items=0 ppid=1937 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:30.757000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Apr 20 19:03:30.792948 kernel: audit: type=1327 audit(1776711810.757:205): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Apr 20 19:03:30.910000 audit[2030]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:30.910000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff33849d80 a2=0 a3=0 items=0 ppid=1937 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:30.968485 kernel: audit: type=1325 
audit(1776711810.910:206): table=filter:3 family=2 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:30.910000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Apr 20 19:03:30.969385 kernel: audit: type=1300 audit(1776711810.910:206): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff33849d80 a2=0 a3=0 items=0 ppid=1937 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:30.969413 kernel: audit: type=1327 audit(1776711810.910:206): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Apr 20 19:03:31.290000 audit[2032]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:31.290000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcaee4deb0 a2=0 a3=0 items=0 ppid=1937 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:31.301411 kernel: audit: type=1325 audit(1776711811.290:207): table=filter:4 family=2 entries=1 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:31.301455 kernel: audit: type=1300 audit(1776711811.290:207): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcaee4deb0 a2=0 a3=0 items=0 ppid=1937 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:31.290000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Apr 20 19:03:31.442000 audit[2034]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:31.442000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd90917560 a2=0 a3=0 items=0 ppid=1937 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:31.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Apr 20 19:03:31.642000 audit[2036]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:31.646998 kernel: kauditd_printk_skb: 4 callbacks suppressed Apr 20 19:03:31.647295 kernel: audit: type=1325 audit(1776711811.642:209): table=filter:6 family=2 entries=1 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:31.642000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff62013980 a2=0 a3=0 items=0 ppid=1937 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:31.672092 kernel: audit: type=1300 audit(1776711811.642:209): arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff62013980 a2=0 a3=0 items=0 ppid=1937 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:31.642000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Apr 20 19:03:31.673387 kernel: audit: type=1327 audit(1776711811.642:209): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Apr 20 19:03:31.720000 audit[2038]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:31.731174 kernel: audit: type=1325 audit(1776711811.720:210): table=filter:7 family=2 entries=1 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:31.720000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffa92ebff0 a2=0 a3=0 items=0 ppid=1937 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:31.738890 kernel: audit: type=1300 audit(1776711811.720:210): arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffa92ebff0 a2=0 a3=0 items=0 ppid=1937 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:31.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 20 19:03:31.757381 kernel: audit: type=1327 audit(1776711811.720:210): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 20 19:03:31.815000 audit[2040]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:31.815000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 
a0=3 a1=7ffd7ac50420 a2=0 a3=0 items=0 ppid=1937 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:31.815000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Apr 20 19:03:31.843131 kernel: audit: type=1325 audit(1776711811.815:211): table=filter:8 family=2 entries=1 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:31.843258 kernel: audit: type=1300 audit(1776711811.815:211): arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd7ac50420 a2=0 a3=0 items=0 ppid=1937 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:31.843288 kernel: audit: type=1327 audit(1776711811.815:211): proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Apr 20 19:03:32.019000 audit[2042]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:32.019000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd8c12d2e0 a2=0 a3=0 items=0 ppid=1937 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:32.019000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Apr 20 19:03:32.272204 kernel: audit: type=1325 audit(1776711812.019:212): 
table=nat:9 family=2 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:32.438000 audit[2047]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:32.438000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fffedb3cc00 a2=0 a3=0 items=0 ppid=1937 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:32.438000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Apr 20 19:03:32.751000 audit[2049]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:32.751000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe61b25a00 a2=0 a3=0 items=0 ppid=1937 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:32.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Apr 20 19:03:32.806000 audit[2051]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:32.806000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff5a441bb0 a2=0 a3=0 items=0 ppid=1937 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:32.806000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Apr 20 19:03:32.991000 audit[2053]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:32.991000 audit[2053]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd1d13ed30 a2=0 a3=0 items=0 ppid=1937 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:32.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 20 19:03:33.035000 audit[2055]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:33.035000 audit[2055]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffde3d6ad20 a2=0 a3=0 items=0 ppid=1937 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:33.035000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Apr 20 19:03:35.191000 audit[2085]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:35.191000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc40169bc0 a2=0 a3=0 items=0 ppid=1937 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:35.191000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Apr 20 19:03:35.244000 audit[2087]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:35.244000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff391bc1b0 a2=0 a3=0 items=0 ppid=1937 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:35.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Apr 20 19:03:35.389000 audit[2089]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:35.389000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8a422610 a2=0 a3=0 items=0 ppid=1937 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:35.389000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Apr 20 19:03:35.523000 audit[2091]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:35.523000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd4973b00 a2=0 a3=0 items=0 ppid=1937 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:35.523000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Apr 20 19:03:35.720000 audit[2093]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:35.720000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe47694740 a2=0 a3=0 items=0 ppid=1937 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:35.720000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Apr 20 19:03:35.902000 audit[2095]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:35.902000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffe0340ae0 a2=0 a3=0 items=0 ppid=1937 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:35.902000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 20 19:03:36.017000 audit[2097]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:36.017000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcc230d330 a2=0 a3=0 items=0 ppid=1937 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:36.017000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Apr 20 19:03:36.176000 audit[2099]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:36.176000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe262bfd00 a2=0 a3=0 items=0 ppid=1937 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:36.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Apr 20 19:03:36.310000 audit[2101]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:36.310000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffcc7d0f990 a2=0 a3=0 items=0 ppid=1937 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:36.310000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Apr 20 19:03:36.601000 audit[2103]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:36.601000 audit[2103]: SYSCALL arch=c000003e syscall=46 
success=yes exit=340 a0=3 a1=7ffd7b0758f0 a2=0 a3=0 items=0 ppid=1937 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:36.601000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Apr 20 19:03:36.747000 audit[2105]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:36.752792 kernel: kauditd_printk_skb: 47 callbacks suppressed Apr 20 19:03:36.753062 kernel: audit: type=1325 audit(1776711816.747:228): table=filter:25 family=10 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:36.747000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffca2daae20 a2=0 a3=0 items=0 ppid=1937 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:36.801681 kernel: audit: type=1300 audit(1776711816.747:228): arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffca2daae20 a2=0 a3=0 items=0 ppid=1937 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:36.747000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Apr 20 19:03:36.802359 kernel: audit: type=1327 audit(1776711816.747:228): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Apr 20 19:03:37.088000 audit[2107]: 
NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:37.088000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe9bf743e0 a2=0 a3=0 items=0 ppid=1937 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:37.107793 kernel: audit: type=1325 audit(1776711817.088:229): table=filter:26 family=10 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:37.109438 kernel: audit: type=1300 audit(1776711817.088:229): arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe9bf743e0 a2=0 a3=0 items=0 ppid=1937 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:37.088000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 20 19:03:37.121337 kernel: audit: type=1327 audit(1776711817.088:229): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Apr 20 19:03:37.184332 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Apr 20 19:03:37.243000 audit[2109]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:37.243000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffb400f460 a2=0 a3=0 items=0 ppid=1937 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:37.275762 kernel: audit: type=1325 audit(1776711817.243:230): table=filter:27 family=10 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:37.257749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:03:37.275935 kernel: audit: type=1300 audit(1776711817.243:230): arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fffb400f460 a2=0 a3=0 items=0 ppid=1937 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:37.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Apr 20 19:03:37.299855 kernel: audit: type=1327 audit(1776711817.243:230): proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Apr 20 19:03:37.511000 audit[2117]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:37.511000 audit[2117]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffffa74a7d0 a2=0 a3=0 items=0 ppid=1937 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:37.511000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Apr 20 19:03:37.527753 kernel: audit: type=1325 audit(1776711817.511:231): table=filter:28 family=2 entries=1 op=nft_register_chain pid=2117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:37.538000 audit[2119]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:03:37.538000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd18a602d0 a2=0 a3=0 items=0 ppid=1937 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:37.538000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Apr 20 19:03:37.646000 audit[2121]: NETFILTER_CFG table=filter:30 family=10 entries=1 op=nft_register_chain pid=2121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:37.646000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc46d363d0 a2=0 a3=0 items=0 ppid=1937 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:03:37.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Apr 20 19:03:37.709000 audit[2123]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:03:37.709000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe00cd57d0 a2=0 a3=0 
items=0 ppid=1937 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:03:37.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Apr 20 19:03:38.205000 audit[2129]: NETFILTER_CFG table=nat:32 family=2 entries=2 op=nft_register_chain pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 20 19:03:38.205000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff459764a0 a2=0 a3=0 items=0 ppid=1937 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:03:38.205000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Apr 20 19:03:38.390000 audit[2131]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_rule pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 20 19:03:38.390000 audit[2131]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcea439d00 a2=0 a3=0 items=0 ppid=1937 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:03:38.390000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Apr 20 19:03:38.637000 audit[2139]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 20 19:03:38.637000 audit[2139]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff7ae22960 a2=0 a3=0 items=0 ppid=1937 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:03:38.637000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054
Apr 20 19:03:38.977000 audit[2147]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_rule pid=2147 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 20 19:03:38.977000 audit[2147]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc046fc7b0 a2=0 a3=0 items=0 ppid=1937 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:03:38.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50
Apr 20 19:03:39.215000 audit[2151]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 20 19:03:39.215000 audit[2151]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd0037f5d0 a2=0 a3=0 items=0 ppid=1937 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:03:39.215000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Apr 20 19:03:39.271000 audit[2154]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2154 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 20 19:03:39.271000 audit[2154]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd43e55eb0 a2=0 a3=0 items=0 ppid=1937 pid=2154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:03:39.271000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552
Apr 20 19:03:39.275961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:03:39.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:39.342000 audit[2157]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2157 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 20 19:03:39.342000 audit[2157]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcce8a2c30 a2=0 a3=0 items=0 ppid=1937 pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:03:39.342000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Apr 20 19:03:39.362766 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:03:39.425000 audit[2159]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2159 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Apr 20 19:03:39.425000 audit[2159]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe5e361cd0 a2=0 a3=0 items=0 ppid=1937 pid=2159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:03:39.425000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Apr 20 19:03:39.506167 systemd-networkd[1441]: docker0: Link UP
Apr 20 19:03:39.651008 dockerd[1937]: time="2026-04-20T19:03:39.649971450Z" level=info msg="Loading containers: done."
Apr 20 19:03:40.195960 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2569287485-merged.mount: Deactivated successfully.
Apr 20 19:03:40.213307 dockerd[1937]: time="2026-04-20T19:03:40.213131734Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 20 19:03:40.224671 dockerd[1937]: time="2026-04-20T19:03:40.222722958Z" level=info msg="Docker daemon" commit=45873be4ae3f5488c9498b3d9f17deaddaf609f4 containerd-snapshotter=false storage-driver=overlay2 version=28.2.2
Apr 20 19:03:40.233946 kubelet[2155]: E0420 19:03:40.233764 2155 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:03:40.234655 dockerd[1937]: time="2026-04-20T19:03:40.234450322Z" level=info msg="Initializing buildkit"
Apr 20 19:03:40.347423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:03:40.349441 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:03:40.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:03:40.352839 systemd[1]: kubelet.service: Consumed 1.783s CPU time, 110.3M memory peak.
Apr 20 19:03:40.467911 dockerd[1937]: time="2026-04-20T19:03:40.465259950Z" level=warning msg="CDI setup error /etc/cdi: failed to monitor for changes: no such file or directory"
Apr 20 19:03:40.467911 dockerd[1937]: time="2026-04-20T19:03:40.465402643Z" level=warning msg="CDI setup error /var/run/cdi: failed to monitor for changes: no such file or directory"
Apr 20 19:03:41.440669 dockerd[1937]: time="2026-04-20T19:03:41.438714366Z" level=info msg="Completed buildkit initialization"
Apr 20 19:03:42.279359 dockerd[1937]: time="2026-04-20T19:03:42.273450444Z" level=info msg="Daemon has completed initialization"
Apr 20 19:03:42.285938 dockerd[1937]: time="2026-04-20T19:03:42.281340904Z" level=info msg="API listen on /run/docker.sock"
Apr 20 19:03:42.290120 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 20 19:03:42.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:42.294623 kernel: kauditd_printk_skb: 37 callbacks suppressed
Apr 20 19:03:42.294686 kernel: audit: type=1130 audit(1776711822.292:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:50.481222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 20 19:03:50.663791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:03:53.948183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:03:53.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:53.962262 kernel: audit: type=1130 audit(1776711833.948:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:03:54.125663 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:03:54.960823 kubelet[2218]: E0420 19:03:54.960628 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:03:54.985099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:03:54.995177 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:03:55.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:03:55.042499 systemd[1]: kubelet.service: Consumed 2.367s CPU time, 108.8M memory peak.
Apr 20 19:03:55.060239 kernel: audit: type=1131 audit(1776711835.037:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:03:58.716032 containerd[1659]: time="2026-04-20T19:03:58.715881483Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 20 19:04:04.179155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4075686148.mount: Deactivated successfully.
Apr 20 19:04:05.196135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Apr 20 19:04:05.256469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:04:07.493456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:04:07.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:04:07.506708 kernel: audit: type=1130 audit(1776711847.494:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:04:07.544961 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:04:08.827296 kubelet[2251]: E0420 19:04:08.827059 2251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:04:08.866505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:04:08.867266 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:04:08.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:04:08.886009 systemd[1]: kubelet.service: Consumed 2.172s CPU time, 110.6M memory peak.
Apr 20 19:04:08.896795 kernel: audit: type=1131 audit(1776711848.882:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:04:18.970511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Apr 20 19:04:19.023012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:04:20.805909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:04:20.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:04:20.833282 kernel: audit: type=1130 audit(1776711860.812:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:04:20.847334 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:04:21.147187 kubelet[2311]: E0420 19:04:21.144959 2311 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:04:21.160678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:04:21.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:04:21.160771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:04:21.161282 systemd[1]: kubelet.service: Consumed 1.183s CPU time, 109.9M memory peak.
Apr 20 19:04:21.176096 kernel: audit: type=1131 audit(1776711861.159:251): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:04:21.579217 containerd[1659]: time="2026-04-20T19:04:21.576894239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:04:21.581115 containerd[1659]: time="2026-04-20T19:04:21.580970205Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30182451"
Apr 20 19:04:21.601129 containerd[1659]: time="2026-04-20T19:04:21.600893985Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:04:21.626153 containerd[1659]: time="2026-04-20T19:04:21.625065139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:04:21.637782 containerd[1659]: time="2026-04-20T19:04:21.637460513Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 22.921448929s"
Apr 20 19:04:21.637782 containerd[1659]: time="2026-04-20T19:04:21.637775337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 20 19:04:21.681746 containerd[1659]: time="2026-04-20T19:04:21.680753590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 20 19:04:28.271029 containerd[1659]: time="2026-04-20T19:04:28.270377419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:04:28.282425 containerd[1659]: time="2026-04-20T19:04:28.281451979Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=1, bytes read=22023329"
Apr 20 19:04:28.310679 containerd[1659]: time="2026-04-20T19:04:28.309241731Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:04:28.343442 containerd[1659]: time="2026-04-20T19:04:28.340508162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:04:28.345344 containerd[1659]: time="2026-04-20T19:04:28.345071762Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 6.662932476s"
Apr 20 19:04:28.345344 containerd[1659]: time="2026-04-20T19:04:28.345232659Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 20 19:04:28.379723 containerd[1659]: time="2026-04-20T19:04:28.377665404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 20 19:04:31.449250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 20 19:04:31.476445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:04:33.127188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:04:33.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:04:33.165324 kernel: audit: type=1130 audit(1776711873.144:252): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:04:33.222080 (kubelet)[2336]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:04:33.711723 kubelet[2336]: E0420 19:04:33.711647 2336 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:04:33.719469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:04:33.719831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:04:33.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:04:33.742837 systemd[1]: kubelet.service: Consumed 1.190s CPU time, 110.3M memory peak.
Apr 20 19:04:33.755863 kernel: audit: type=1131 audit(1776711873.740:253): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:04:39.655121 containerd[1659]: time="2026-04-20T19:04:39.652177698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:04:39.678927 containerd[1659]: time="2026-04-20T19:04:39.672472615Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20280985"
Apr 20 19:04:39.723214 containerd[1659]: time="2026-04-20T19:04:39.721503356Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:04:39.859351 containerd[1659]: time="2026-04-20T19:04:39.853435013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:04:40.013288 containerd[1659]: time="2026-04-20T19:04:40.009321528Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 11.631506393s"
Apr 20 19:04:40.025033 containerd[1659]: time="2026-04-20T19:04:40.021521739Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 20 19:04:40.168968 containerd[1659]: time="2026-04-20T19:04:40.165498151Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 20 19:04:43.911404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 20 19:04:43.973168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:04:49.381189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:04:49.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:04:49.427374 kernel: audit: type=1130 audit(1776711889.381:254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:04:49.606164 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:04:51.654506 kubelet[2357]: E0420 19:04:51.652361 2357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:04:51.719944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:04:51.726302 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:04:51.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:04:51.738265 systemd[1]: kubelet.service: Consumed 3.527s CPU time, 110.3M memory peak.
Apr 20 19:04:51.755367 kernel: audit: type=1131 audit(1776711891.736:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:05:01.974206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 20 19:05:02.009781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:05:05.682218 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:05:05.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:05:05.735234 kernel: audit: type=1130 audit(1776711905.720:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:05:05.791994 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:05:07.107336 kubelet[2373]: E0420 19:05:07.106923 2373 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:05:07.168605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:05:07.170757 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:05:07.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:05:07.239129 kernel: audit: type=1131 audit(1776711907.175:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:05:07.239739 systemd[1]: kubelet.service: Consumed 2.826s CPU time, 108.6M memory peak.
Apr 20 19:05:13.405466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount653915396.mount: Deactivated successfully.
Apr 20 19:05:17.428464 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 20 19:05:17.489866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:05:19.620281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:05:19.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:05:19.660228 kernel: audit: type=1130 audit(1776711919.643:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:05:19.742791 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:05:21.343915 kubelet[2394]: E0420 19:05:21.342270 2394 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:05:21.409473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:05:21.476814 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:05:21.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:05:21.500104 systemd[1]: kubelet.service: Consumed 2.444s CPU time, 110.5M memory peak.
Apr 20 19:05:21.516760 kernel: audit: type=1131 audit(1776711921.498:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:05:25.124023 containerd[1659]: time="2026-04-20T19:05:25.122006217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:05:25.132334 containerd[1659]: time="2026-04-20T19:05:25.127420854Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32006989"
Apr 20 19:05:25.136903 containerd[1659]: time="2026-04-20T19:05:25.136372110Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:05:25.342302 containerd[1659]: time="2026-04-20T19:05:25.341724396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:05:25.379173 containerd[1659]: time="2026-04-20T19:05:25.378388036Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 45.205835439s"
Apr 20 19:05:25.379173 containerd[1659]: time="2026-04-20T19:05:25.378867748Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 20 19:05:25.483654 containerd[1659]: time="2026-04-20T19:05:25.482750164Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 20 19:05:31.318205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704636731.mount: Deactivated successfully.
Apr 20 19:05:31.820224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 20 19:05:31.841776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:05:33.427415 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:05:33.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:05:33.453292 kernel: audit: type=1130 audit(1776711933.440:260): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:05:33.479061 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:05:33.813998 kubelet[2422]: E0420 19:05:33.813514 2422 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:05:33.841803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:05:33.841954 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:05:33.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:05:33.882416 systemd[1]: kubelet.service: Consumed 1.026s CPU time, 110.1M memory peak.
Apr 20 19:05:33.908723 kernel: audit: type=1131 audit(1776711933.881:261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:05:43.130770 containerd[1659]: time="2026-04-20T19:05:43.129855339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:05:43.139273 containerd[1659]: time="2026-04-20T19:05:43.138609902Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20931441"
Apr 20 19:05:43.161916 containerd[1659]: time="2026-04-20T19:05:43.158796150Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:05:43.200521 containerd[1659]: time="2026-04-20T19:05:43.200217608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:05:43.208765 containerd[1659]: time="2026-04-20T19:05:43.208183131Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 17.724835698s"
Apr 20 19:05:43.208765 containerd[1659]: time="2026-04-20T19:05:43.208695367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 20 19:05:43.244954 containerd[1659]: time="2026-04-20T19:05:43.243845128Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 20 19:05:43.913641 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 20 19:05:43.976880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 20 19:05:44.774058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228725970.mount: Deactivated successfully.
Apr 20 19:05:44.864260 containerd[1659]: time="2026-04-20T19:05:44.863415532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:05:44.879025 containerd[1659]: time="2026-04-20T19:05:44.878097734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=881"
Apr 20 19:05:44.955872 containerd[1659]: time="2026-04-20T19:05:44.952311256Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:05:44.986596 containerd[1659]: time="2026-04-20T19:05:44.983745141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:05:44.988857 containerd[1659]: time="2026-04-20T19:05:44.988587949Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.744191212s"
Apr 20 19:05:44.989211 containerd[1659]: time="2026-04-20T19:05:44.989190323Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 20 19:05:45.001335 containerd[1659]: time="2026-04-20T19:05:44.999157524Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 20 19:05:45.242326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 20 19:05:45.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:05:45.258725 kernel: audit: type=1130 audit(1776711945.242:262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:05:45.339735 (kubelet)[2483]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 20 19:05:45.995493 kubelet[2483]: E0420 19:05:45.995001 2483 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 20 19:05:46.018923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 20 19:05:46.019076 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 20 19:05:46.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Apr 20 19:05:46.034137 systemd[1]: kubelet.service: Consumed 1.175s CPU time, 108.9M memory peak.
Apr 20 19:05:46.046088 kernel: audit: type=1131 audit(1776711946.033:263): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
terminal=? res=failed' Apr 20 19:05:48.552296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233894136.mount: Deactivated successfully. Apr 20 19:05:56.213041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Apr 20 19:05:56.241622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:05:57.911933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:05:57.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:05:57.929478 kernel: audit: type=1130 audit(1776711957.913:264): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:05:57.982255 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:05:58.763363 kubelet[2531]: E0420 19:05:58.761622 2531 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:05:58.776849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:05:58.777016 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:05:58.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Apr 20 19:05:58.789213 systemd[1]: kubelet.service: Consumed 1.658s CPU time, 110.5M memory peak. Apr 20 19:05:58.799692 kernel: audit: type=1131 audit(1776711958.788:265): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 20 19:06:06.020028 containerd[1659]: time="2026-04-20T19:06:06.018747510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:06:06.036212 containerd[1659]: time="2026-04-20T19:06:06.035524116Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23711543" Apr 20 19:06:06.065764 containerd[1659]: time="2026-04-20T19:06:06.065156280Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:06:06.153331 containerd[1659]: time="2026-04-20T19:06:06.152138485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:06:06.166896 containerd[1659]: time="2026-04-20T19:06:06.166250041Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 21.163929046s" Apr 20 19:06:06.168140 containerd[1659]: time="2026-04-20T19:06:06.166880433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 20 19:06:08.909859 systemd[1]: 
kubelet.service: Scheduled restart job, restart counter is at 15. Apr 20 19:06:08.957421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:06:12.295844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:06:12.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:06:12.319426 kernel: audit: type=1130 audit(1776711972.303:266): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:06:12.366098 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:06:13.496075 kubelet[2601]: E0420 19:06:13.495933 2601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:06:13.515923 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:06:13.516098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:06:13.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 20 19:06:13.520237 systemd[1]: kubelet.service: Consumed 2.524s CPU time, 110.6M memory peak. 
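The restart loop recorded above (counters 12 through 17, each ending in `status=1/FAILURE`) is the expected pattern when kubelet is started before `kubeadm init` or `kubeadm join` has written `/var/lib/kubelet/config.yaml`. A minimal diagnostic sketch, assuming only the path named in the error message; the `check_kubelet_config` helper is hypothetical, introduced here for illustration and not part of kubelet or kubeadm:

```shell
# Hypothetical helper (not part of kubelet/kubeadm): report whether the
# config file the unit keeps failing on actually exists on disk.
check_kubelet_config() {
  if [ -f "$1" ]; then
    echo "present: $1"
  else
    echo "missing: $1"
  fi
}

# Same path the log's "failed to load kubelet config file" error names.
check_kubelet_config /var/lib/kubelet/config.yaml
```

Until that file is generated, every scheduled restart fails identically, which matches the repeating `SERVICE_START`/`SERVICE_STOP` audit pairs in the log.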
Apr 20 19:06:13.535596 kernel: audit: type=1131 audit(1776711973.519:267): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 20 19:06:23.759460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Apr 20 19:06:23.829182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:06:28.349459 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:06:28.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:06:28.409064 kernel: audit: type=1130 audit(1776711988.350:268): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:06:28.571808 (kubelet)[2621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:06:29.941647 kubelet[2621]: E0420 19:06:29.936334 2621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:06:30.047197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:06:30.047662 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 20 19:06:30.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=failed' Apr 20 19:06:30.055816 systemd[1]: kubelet.service: Consumed 2.982s CPU time, 111M memory peak. Apr 20 19:06:30.070902 kernel: audit: type=1131 audit(1776711990.053:269): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 20 19:06:40.202348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Apr 20 19:06:40.432053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:06:45.481317 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:06:45.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:06:45.533963 kernel: audit: type=1130 audit(1776712005.506:270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:06:45.580947 (kubelet)[2639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 20 19:06:46.559102 kubelet[2639]: E0420 19:06:46.558446 2639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 20 19:06:46.675315 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 20 19:06:46.677135 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 20 19:06:46.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 20 19:06:46.692013 systemd[1]: kubelet.service: Consumed 3.172s CPU time, 109.4M memory peak. Apr 20 19:06:46.701674 kernel: audit: type=1131 audit(1776712006.691:271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 20 19:06:53.923375 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:06:53.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:06:53.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:06:53.932971 systemd[1]: kubelet.service: Consumed 3.172s CPU time, 109.4M memory peak. Apr 20 19:06:53.943106 kernel: audit: type=1130 audit(1776712013.924:272): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:06:53.943672 kernel: audit: type=1131 audit(1776712013.925:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:06:54.023768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:06:54.421453 systemd[1]: Reload requested from client PID 2663 ('systemctl') (unit session-8.scope)... 
Apr 20 19:06:54.422099 systemd[1]: Reloading... Apr 20 19:06:57.981098 systemd-ssh-generator[2712]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 19:06:58.031890 (sd-exec-strv)[2694]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 19:06:58.036607 zram_generator::config[2717]: No configuration found. Apr 20 19:07:01.821501 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 19:07:03.969980 systemd[1]: Reloading finished in 9538 ms. Apr 20 19:07:04.041000 audit: BPF prog-id=39 op=LOAD Apr 20 19:07:04.045000 audit: BPF prog-id=22 op=UNLOAD Apr 20 19:07:04.052000 audit: BPF prog-id=40 op=LOAD Apr 20 19:07:04.060191 kernel: audit: type=1334 audit(1776712024.041:274): prog-id=39 op=LOAD Apr 20 19:07:04.067876 kernel: audit: type=1334 audit(1776712024.045:275): prog-id=22 op=UNLOAD Apr 20 19:07:04.067000 audit: BPF prog-id=41 op=LOAD Apr 20 19:07:04.069989 kernel: audit: type=1334 audit(1776712024.052:276): prog-id=40 op=LOAD Apr 20 19:07:04.070029 kernel: audit: type=1334 audit(1776712024.067:277): prog-id=41 op=LOAD Apr 20 19:07:04.069000 audit: BPF prog-id=23 op=UNLOAD Apr 20 19:07:04.069000 audit: BPF prog-id=24 op=UNLOAD Apr 20 19:07:04.073879 kernel: audit: type=1334 audit(1776712024.069:278): prog-id=23 op=UNLOAD Apr 20 19:07:04.075472 kernel: audit: type=1334 audit(1776712024.069:279): prog-id=24 op=UNLOAD Apr 20 19:07:04.143000 audit: BPF prog-id=42 op=LOAD Apr 20 19:07:04.145000 audit: BPF prog-id=19 op=UNLOAD Apr 20 19:07:04.149218 kernel: audit: type=1334 audit(1776712024.143:280): prog-id=42 op=LOAD Apr 20 19:07:04.149256 kernel: audit: type=1334 audit(1776712024.145:281): prog-id=19 op=UNLOAD Apr 20 19:07:04.148000 audit: BPF prog-id=43 op=LOAD Apr 20 19:07:04.150720 kernel: audit: type=1334 audit(1776712024.148:282): prog-id=43 op=LOAD Apr 20 19:07:04.148000 audit: BPF prog-id=44 op=LOAD Apr 20
19:07:04.148000 audit: BPF prog-id=20 op=UNLOAD Apr 20 19:07:04.148000 audit: BPF prog-id=21 op=UNLOAD Apr 20 19:07:04.156000 audit: BPF prog-id=45 op=LOAD Apr 20 19:07:04.156000 audit: BPF prog-id=30 op=UNLOAD Apr 20 19:07:04.156000 audit: BPF prog-id=46 op=LOAD Apr 20 19:07:04.156000 audit: BPF prog-id=47 op=LOAD Apr 20 19:07:04.156000 audit: BPF prog-id=31 op=UNLOAD Apr 20 19:07:04.156000 audit: BPF prog-id=32 op=UNLOAD Apr 20 19:07:04.157946 kernel: audit: type=1334 audit(1776712024.148:283): prog-id=44 op=LOAD Apr 20 19:07:04.158000 audit: BPF prog-id=48 op=LOAD Apr 20 19:07:04.159000 audit: BPF prog-id=26 op=UNLOAD Apr 20 19:07:04.164000 audit: BPF prog-id=49 op=LOAD Apr 20 19:07:04.167000 audit: BPF prog-id=35 op=UNLOAD Apr 20 19:07:04.184000 audit: BPF prog-id=50 op=LOAD Apr 20 19:07:04.184000 audit: BPF prog-id=27 op=UNLOAD Apr 20 19:07:04.195000 audit: BPF prog-id=51 op=LOAD Apr 20 19:07:04.202000 audit: BPF prog-id=52 op=LOAD Apr 20 19:07:04.202000 audit: BPF prog-id=28 op=UNLOAD Apr 20 19:07:04.202000 audit: BPF prog-id=29 op=UNLOAD Apr 20 19:07:04.345000 audit: BPF prog-id=53 op=LOAD Apr 20 19:07:04.347000 audit: BPF prog-id=36 op=UNLOAD Apr 20 19:07:04.350000 audit: BPF prog-id=54 op=LOAD Apr 20 19:07:04.351000 audit: BPF prog-id=55 op=LOAD Apr 20 19:07:04.351000 audit: BPF prog-id=37 op=UNLOAD Apr 20 19:07:04.351000 audit: BPF prog-id=38 op=UNLOAD Apr 20 19:07:04.389000 audit: BPF prog-id=56 op=LOAD Apr 20 19:07:04.391000 audit: BPF prog-id=25 op=UNLOAD Apr 20 19:07:04.442000 audit: BPF prog-id=57 op=LOAD Apr 20 19:07:04.444000 audit: BPF prog-id=58 op=LOAD Apr 20 19:07:04.444000 audit: BPF prog-id=33 op=UNLOAD Apr 20 19:07:04.444000 audit: BPF prog-id=34 op=UNLOAD Apr 20 19:07:04.956505 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 20 19:07:04.956645 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 20 19:07:04.960164 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 20 19:07:04.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Apr 20 19:07:04.963510 systemd[1]: kubelet.service: Consumed 1.036s CPU time, 98.6M memory peak. Apr 20 19:07:05.037076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:07:09.994148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:07:09.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:07:10.018599 kernel: kauditd_printk_skb: 31 callbacks suppressed Apr 20 19:07:10.018680 kernel: audit: type=1130 audit(1776712029.999:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:07:10.037485 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 19:07:10.714446 kubelet[2765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 19:07:10.717738 kubelet[2765]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 20 19:07:10.717738 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 20 19:07:10.717738 kubelet[2765]: I0420 19:07:10.716824 2765 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 20 19:07:14.651645 kubelet[2765]: I0420 19:07:14.650311 2765 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 20 19:07:14.659123 kubelet[2765]: I0420 19:07:14.653741 2765 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 19:07:14.659123 kubelet[2765]: I0420 19:07:14.654103 2765 server.go:956] "Client rotation is on, will bootstrap in background" Apr 20 19:07:15.013929 kubelet[2765]: E0420 19:07:15.008127 2765 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:07:15.073458 kubelet[2765]: I0420 19:07:15.070765 2765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 19:07:15.205428 kubelet[2765]: I0420 19:07:15.205322 2765 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 19:07:15.413240 kubelet[2765]: I0420 19:07:15.413090 2765 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 20 19:07:15.417347 kubelet[2765]: I0420 19:07:15.416078 2765 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 19:07:15.421443 kubelet[2765]: I0420 19:07:15.418341 2765 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 19:07:15.421443 kubelet[2765]: I0420 19:07:15.421433 2765 topology_manager.go:138] "Creating topology manager with none policy" Apr 20 19:07:15.422389 
kubelet[2765]: I0420 19:07:15.421601 2765 container_manager_linux.go:303] "Creating device plugin manager" Apr 20 19:07:15.422389 kubelet[2765]: I0420 19:07:15.422238 2765 state_mem.go:36] "Initialized new in-memory state store" Apr 20 19:07:15.502693 kubelet[2765]: I0420 19:07:15.501910 2765 kubelet.go:480] "Attempting to sync node with API server" Apr 20 19:07:15.502693 kubelet[2765]: I0420 19:07:15.502166 2765 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 19:07:15.503294 kubelet[2765]: I0420 19:07:15.502915 2765 kubelet.go:386] "Adding apiserver pod source" Apr 20 19:07:15.503294 kubelet[2765]: I0420 19:07:15.502986 2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 19:07:15.508864 kubelet[2765]: E0420 19:07:15.508812 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:07:15.512641 kubelet[2765]: E0420 19:07:15.509187 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:07:15.514588 kubelet[2765]: I0420 19:07:15.514524 2765 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 19:07:15.531330 kubelet[2765]: I0420 19:07:15.525332 2765 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 19:07:15.541114 kubelet[2765]: W0420 
19:07:15.539336 2765 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 20 19:07:15.640382 kubelet[2765]: I0420 19:07:15.635395 2765 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 20 19:07:15.640382 kubelet[2765]: I0420 19:07:15.635508 2765 server.go:1289] "Started kubelet" Apr 20 19:07:15.644712 kubelet[2765]: I0420 19:07:15.644511 2765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 19:07:15.649515 kubelet[2765]: I0420 19:07:15.647413 2765 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 19:07:15.649515 kubelet[2765]: I0420 19:07:15.647664 2765 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 19:07:15.649515 kubelet[2765]: I0420 19:07:15.648663 2765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 20 19:07:15.654233 kubelet[2765]: I0420 19:07:15.651944 2765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 19:07:15.655672 kubelet[2765]: I0420 19:07:15.655628 2765 server.go:317] "Adding debug handlers to kubelet server" Apr 20 19:07:15.657333 kubelet[2765]: E0420 19:07:15.656981 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:07:15.657716 kubelet[2765]: I0420 19:07:15.657680 2765 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 20 19:07:15.657887 kubelet[2765]: I0420 19:07:15.657855 2765 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 20 19:07:15.657912 kubelet[2765]: I0420 19:07:15.657906 2765 reconciler.go:26] "Reconciler: start to sync state" Apr 20 19:07:15.658746 kubelet[2765]: E0420 19:07:15.658696 2765 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms" Apr 20 19:07:15.658837 kubelet[2765]: E0420 19:07:15.658820 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:07:15.668254 kubelet[2765]: I0420 19:07:15.665176 2765 factory.go:223] Registration of the systemd container factory successfully Apr 20 19:07:15.668254 kubelet[2765]: E0420 19:07:15.656709 2765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8262ed83606db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:07:15.635431131 +0000 UTC m=+5.487396571,LastTimestamp:2026-04-20 19:07:15.635431131 +0000 UTC m=+5.487396571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:07:15.672817 kubelet[2765]: I0420 19:07:15.668709 2765 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 19:07:15.683221 kubelet[2765]: I0420 19:07:15.682356 2765 factory.go:223] Registration of the containerd container factory successfully Apr 20 
19:07:15.683221 kubelet[2765]: E0420 19:07:15.682888 2765 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 20 19:07:15.709000 audit[2785]: NETFILTER_CFG table=mangle:40 family=2 entries=2 op=nft_register_chain pid=2785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:15.709000 audit[2785]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff51da6610 a2=0 a3=0 items=0 ppid=2765 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:15.718370 kernel: audit: type=1325 audit(1776712035.709:316): table=mangle:40 family=2 entries=2 op=nft_register_chain pid=2785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:15.718398 kernel: audit: type=1300 audit(1776712035.709:316): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff51da6610 a2=0 a3=0 items=0 ppid=2765 pid=2785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:15.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Apr 20 19:07:15.734590 kernel: audit: type=1327 audit(1776712035.709:316): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Apr 20 19:07:15.734000 audit[2786]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:15.734000 audit[2786]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdadb17cf0 a2=0 a3=0 items=0 ppid=2765 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:15.756102 kernel: audit: type=1325 audit(1776712035.734:317): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:15.756507 kernel: audit: type=1300 audit(1776712035.734:317): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdadb17cf0 a2=0 a3=0 items=0 ppid=2765 pid=2786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:15.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Apr 20 19:07:15.772984 kernel: audit: type=1327 audit(1776712035.734:317): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Apr 20 19:07:15.773313 kubelet[2765]: E0420 19:07:15.763749 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:07:15.800000 audit[2788]: NETFILTER_CFG table=filter:42 family=2 entries=2 op=nft_register_chain pid=2788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:15.808869 kernel: audit: type=1325 audit(1776712035.800:318): table=filter:42 family=2 entries=2 op=nft_register_chain pid=2788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:15.800000 audit[2788]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc36340870 a2=0 a3=0 items=0 ppid=2765 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:15.830883 kernel: audit: type=1300 audit(1776712035.800:318): 
arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc36340870 a2=0 a3=0 items=0 ppid=2765 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:15.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Apr 20 19:07:15.843079 kernel: audit: type=1327 audit(1776712035.800:318): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Apr 20 19:07:15.880979 kubelet[2765]: E0420 19:07:15.879926 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:07:15.886937 kubelet[2765]: E0420 19:07:15.886252 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms" Apr 20 19:07:15.896247 kubelet[2765]: I0420 19:07:15.896170 2765 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 20 19:07:15.896247 kubelet[2765]: I0420 19:07:15.896213 2765 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 20 19:07:15.896721 kubelet[2765]: I0420 19:07:15.896433 2765 state_mem.go:36] "Initialized new in-memory state store" Apr 20 19:07:15.933000 audit[2793]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=2793 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:15.933000 audit[2793]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd11f8aec0 a2=0 a3=0 items=0 ppid=2765 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:15.933000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Apr 20 19:07:15.971306 kubelet[2765]: I0420 19:07:15.933603 2765 policy_none.go:49] "None policy: Start" Apr 20 19:07:15.971306 kubelet[2765]: I0420 19:07:15.934970 2765 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 20 19:07:15.971306 kubelet[2765]: I0420 19:07:15.935089 2765 state_mem.go:35] "Initializing new in-memory state store" Apr 20 19:07:15.971613 kernel: audit: type=1325 audit(1776712035.933:319): table=filter:43 family=2 entries=2 op=nft_register_chain pid=2793 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:15.981592 kubelet[2765]: E0420 19:07:15.981328 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:07:16.135798 kubelet[2765]: E0420 19:07:16.085452 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:07:16.160000 audit[2796]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:16.160000 audit[2796]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffdefc3f350 a2=0 a3=0 items=0 ppid=2765 pid=2796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:16.160000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Apr 20 19:07:16.162126 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Apr 20 19:07:16.173488 kubelet[2765]: I0420 19:07:16.161972 2765 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 20 19:07:16.177000 audit[2799]: NETFILTER_CFG table=mangle:45 family=2 entries=1 op=nft_register_chain pid=2799 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:16.177000 audit[2799]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff77d1fe80 a2=0 a3=0 items=0 ppid=2765 pid=2799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:16.177000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Apr 20 19:07:16.181000 audit[2798]: NETFILTER_CFG table=mangle:46 family=10 entries=2 op=nft_register_chain pid=2798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:07:16.181000 audit[2798]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdd9e945e0 a2=0 a3=0 items=0 ppid=2765 pid=2798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:16.181000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Apr 20 19:07:16.188975 kubelet[2765]: I0420 19:07:16.185217 2765 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 20 19:07:16.188975 kubelet[2765]: I0420 19:07:16.188914 2765 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 20 19:07:16.188975 kubelet[2765]: I0420 19:07:16.188950 2765 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 20 19:07:16.188975 kubelet[2765]: I0420 19:07:16.188960 2765 kubelet.go:2436] "Starting kubelet main sync loop" Apr 20 19:07:16.189615 kubelet[2765]: E0420 19:07:16.189216 2765 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 19:07:16.199000 audit[2800]: NETFILTER_CFG table=nat:47 family=2 entries=1 op=nft_register_chain pid=2800 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:16.199000 audit[2800]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed8da8c30 a2=0 a3=0 items=0 ppid=2765 pid=2800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:16.199000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Apr 20 19:07:16.200832 kubelet[2765]: E0420 19:07:16.200321 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:07:16.200000 audit[2801]: NETFILTER_CFG table=mangle:48 family=10 entries=1 op=nft_register_chain pid=2801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:07:16.200000 audit[2801]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2425e350 a2=0 a3=0 items=0 ppid=2765 pid=2801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:16.200000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Apr 20 19:07:16.211000 audit[2803]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=2803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:07:16.211000 audit[2803]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8caee940 a2=0 a3=0 items=0 ppid=2765 pid=2803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:16.211000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Apr 20 19:07:16.214000 audit[2802]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2802 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:07:16.214000 audit[2802]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee04ebb50 a2=0 a3=0 items=0 ppid=2765 pid=2802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:16.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Apr 20 19:07:16.227000 audit[2804]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_chain pid=2804 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:07:16.227000 audit[2804]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc9fab3710 a2=0 a3=0 items=0 ppid=2765 pid=2804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:16.227000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Apr 20 19:07:16.243168 kubelet[2765]: E0420 19:07:16.241519 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:07:16.330634 kubelet[2765]: E0420 19:07:16.327262 2765 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 19:07:16.334709 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 20 19:07:16.335236 kubelet[2765]: E0420 19:07:16.335178 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms" Apr 20 19:07:16.348098 kubelet[2765]: E0420 19:07:16.344678 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:07:16.408603 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 20 19:07:16.452939 kubelet[2765]: E0420 19:07:16.452640 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 20 19:07:16.469352 kubelet[2765]: E0420 19:07:16.469284 2765 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 20 19:07:16.472750 kubelet[2765]: I0420 19:07:16.472681 2765 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 20 19:07:16.472750 kubelet[2765]: I0420 19:07:16.472717 2765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 20 19:07:16.473715 kubelet[2765]: I0420 19:07:16.473358 2765 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 20 19:07:16.554700 kubelet[2765]: E0420 19:07:16.552345 2765 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 20 19:07:16.588753 kubelet[2765]: E0420 19:07:16.587882 2765 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:07:16.596600 kubelet[2765]: E0420 19:07:16.596427 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:07:16.664239 kubelet[2765]: I0420 19:07:16.664061 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ef51a6b32499d3d1e531fb8b3a83d4f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ef51a6b32499d3d1e531fb8b3a83d4f\") " pod="kube-system/kube-apiserver-localhost" Apr 20 
19:07:16.664239 kubelet[2765]: I0420 19:07:16.664131 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ef51a6b32499d3d1e531fb8b3a83d4f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ef51a6b32499d3d1e531fb8b3a83d4f\") " pod="kube-system/kube-apiserver-localhost" Apr 20 19:07:16.664239 kubelet[2765]: I0420 19:07:16.664205 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ef51a6b32499d3d1e531fb8b3a83d4f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ef51a6b32499d3d1e531fb8b3a83d4f\") " pod="kube-system/kube-apiserver-localhost" Apr 20 19:07:16.664239 kubelet[2765]: I0420 19:07:16.664273 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 19:07:16.747087 kubelet[2765]: I0420 19:07:16.746967 2765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 19:07:16.754335 kubelet[2765]: E0420 19:07:16.754207 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 20 19:07:16.770433 kubelet[2765]: I0420 19:07:16.769215 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 
19:07:16.773697 kubelet[2765]: I0420 19:07:16.769400 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 19:07:16.780117 kubelet[2765]: I0420 19:07:16.777015 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 19:07:16.780867 kubelet[2765]: I0420 19:07:16.780738 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 19:07:16.783220 kubelet[2765]: I0420 19:07:16.782609 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 20 19:07:16.804189 systemd[1]: Created slice kubepods-burstable-pod5ef51a6b32499d3d1e531fb8b3a83d4f.slice - libcontainer container kubepods-burstable-pod5ef51a6b32499d3d1e531fb8b3a83d4f.slice. 
Apr 20 19:07:16.827080 kubelet[2765]: E0420 19:07:16.825648 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:07:16.934616 kubelet[2765]: E0420 19:07:16.934326 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:16.941450 kubelet[2765]: E0420 19:07:16.941418 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:16.962634 containerd[1659]: time="2026-04-20T19:07:16.962249933Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"5ef51a6b32499d3d1e531fb8b3a83d4f\" namespace:\"kube-system\"" Apr 20 19:07:16.963751 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. 
Apr 20 19:07:16.975468 kubelet[2765]: I0420 19:07:16.974109 2765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 19:07:16.982603 kubelet[2765]: E0420 19:07:16.982245 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 20 19:07:17.002058 kubelet[2765]: E0420 19:07:17.002011 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:07:17.011284 kubelet[2765]: E0420 19:07:17.008478 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:17.018340 kubelet[2765]: E0420 19:07:17.017010 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:17.058050 containerd[1659]: time="2026-04-20T19:07:17.057769914Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"e9ca41790ae21be9f4cbd451ade0acec\" namespace:\"kube-system\"" Apr 20 19:07:17.058126 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. 
Apr 20 19:07:17.128872 kubelet[2765]: E0420 19:07:17.127586 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:17.140206 kubelet[2765]: E0420 19:07:17.139508 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s" Apr 20 19:07:17.140206 kubelet[2765]: E0420 19:07:17.139978 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:17.149270 containerd[1659]: time="2026-04-20T19:07:17.148208442Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"33fee6ba1581201eda98a989140db110\" namespace:\"kube-system\"" Apr 20 19:07:17.181607 kubelet[2765]: E0420 19:07:17.180394 2765 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:07:17.437657 kubelet[2765]: I0420 19:07:17.437363 2765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 19:07:17.451697 kubelet[2765]: E0420 19:07:17.449356 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 20 19:07:17.538204 kubelet[2765]: E0420 19:07:17.535944 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:07:18.321441 kubelet[2765]: I0420 19:07:18.320319 2765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 19:07:18.356461 kubelet[2765]: E0420 19:07:18.346155 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 20 19:07:18.487194 kubelet[2765]: E0420 19:07:18.487103 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:07:18.749697 kubelet[2765]: E0420 19:07:18.749600 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="3.2s" Apr 20 19:07:18.776369 kubelet[2765]: E0420 19:07:18.773941 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:07:18.930565 kubelet[2765]: E0420 19:07:18.926249 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:07:19.411609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913628910.mount: Deactivated successfully. Apr 20 19:07:19.610664 containerd[1659]: time="2026-04-20T19:07:19.608743607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 19:07:19.620904 containerd[1659]: time="2026-04-20T19:07:19.620498538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Apr 20 19:07:19.625325 containerd[1659]: time="2026-04-20T19:07:19.625068530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 19:07:19.641700 kubelet[2765]: E0420 19:07:19.639835 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:07:19.642986 containerd[1659]: time="2026-04-20T19:07:19.642397206Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 19:07:19.646705 containerd[1659]: time="2026-04-20T19:07:19.646171394Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Apr 20 19:07:19.653054 containerd[1659]: time="2026-04-20T19:07:19.650226550Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 19:07:19.667160 containerd[1659]: time="2026-04-20T19:07:19.664699355Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Apr 20 19:07:19.814757 containerd[1659]: time="2026-04-20T19:07:19.811387081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 20 19:07:19.947432 containerd[1659]: time="2026-04-20T19:07:19.946758628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.822867436s" Apr 20 19:07:19.948062 containerd[1659]: time="2026-04-20T19:07:19.948012266Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.718243247s" Apr 20 19:07:19.949076 containerd[1659]: time="2026-04-20T19:07:19.948387518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.959288993s" Apr 20 19:07:19.963185 kubelet[2765]: I0420 19:07:19.962346 2765 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 19:07:19.963829 kubelet[2765]: E0420 19:07:19.963749 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 20 19:07:20.273625 containerd[1659]: time="2026-04-20T19:07:20.273254657Z" level=info msg="connecting to shim b20307ecdf6e20705e0fdd182059b7830a4642f157174536595c44ea5ac2f131" address="unix:///run/containerd/s/2de9e38222e994fa67b88e4210b4b8c6d1d2095391be368ffb059e8fad5a87a8" namespace=k8s.io protocol=ttrpc version=3 Apr 20 19:07:20.299930 containerd[1659]: time="2026-04-20T19:07:20.299055799Z" level=info msg="connecting to shim 4541a931ad8dcecb05315d64587cd8ba7190629062d02fc0133cc1309c2941e5" address="unix:///run/containerd/s/a3573af0687a06f6f28b48e733fe9c56ab42bf434cb334a982435e3700d4ec37" namespace=k8s.io protocol=ttrpc version=3 Apr 20 19:07:20.321278 containerd[1659]: time="2026-04-20T19:07:20.316929582Z" level=info msg="connecting to shim a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2" address="unix:///run/containerd/s/80102222aa3ed4b7ee78377cd8f0cd98fe2254d5e4d09c655e1726e3fa17fed4" namespace=k8s.io protocol=ttrpc version=3 Apr 20 19:07:20.754636 systemd[1]: Started cri-containerd-4541a931ad8dcecb05315d64587cd8ba7190629062d02fc0133cc1309c2941e5.scope - libcontainer container 4541a931ad8dcecb05315d64587cd8ba7190629062d02fc0133cc1309c2941e5. Apr 20 19:07:20.847191 systemd[1]: Started cri-containerd-b20307ecdf6e20705e0fdd182059b7830a4642f157174536595c44ea5ac2f131.scope - libcontainer container b20307ecdf6e20705e0fdd182059b7830a4642f157174536595c44ea5ac2f131. Apr 20 19:07:21.060032 systemd[1]: Started cri-containerd-a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2.scope - libcontainer container a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2. 
Apr 20 19:07:21.087000 audit: BPF prog-id=59 op=LOAD Apr 20 19:07:21.092874 kernel: kauditd_printk_skb: 26 callbacks suppressed Apr 20 19:07:21.092950 kernel: audit: type=1334 audit(1776712041.087:328): prog-id=59 op=LOAD Apr 20 19:07:21.099000 audit: BPF prog-id=60 op=LOAD Apr 20 19:07:21.103992 kernel: audit: type=1334 audit(1776712041.099:329): prog-id=60 op=LOAD Apr 20 19:07:21.099000 audit[2862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0240 a2=98 a3=0 items=0 ppid=2837 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.120517 kernel: audit: type=1300 audit(1776712041.099:329): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0240 a2=98 a3=0 items=0 ppid=2837 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435343161393331616438646365636230353331356436343538376364 Apr 20 19:07:21.099000 audit: BPF prog-id=60 op=UNLOAD Apr 20 19:07:21.099000 audit[2862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.159436 kernel: audit: type=1327 audit(1776712041.099:329): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435343161393331616438646365636230353331356436343538376364 Apr 20 19:07:21.159467 kernel: audit: type=1334 audit(1776712041.099:330): prog-id=60 op=UNLOAD Apr 20 19:07:21.159483 kernel: audit: type=1300 audit(1776712041.099:330): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435343161393331616438646365636230353331356436343538376364 Apr 20 19:07:21.171160 kernel: audit: type=1327 audit(1776712041.099:330): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435343161393331616438646365636230353331356436343538376364 Apr 20 19:07:21.099000 audit: BPF prog-id=61 op=LOAD Apr 20 19:07:21.178621 kernel: audit: type=1334 audit(1776712041.099:331): prog-id=61 op=LOAD Apr 20 19:07:21.099000 audit[2862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0490 a2=98 a3=0 items=0 ppid=2837 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.238990 kernel: audit: type=1300 audit(1776712041.099:331): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0490 a2=98 a3=0 items=0 ppid=2837 
pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435343161393331616438646365636230353331356436343538376364 Apr 20 19:07:21.248677 kernel: audit: type=1327 audit(1776712041.099:331): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435343161393331616438646365636230353331356436343538376364 Apr 20 19:07:21.099000 audit: BPF prog-id=62 op=LOAD Apr 20 19:07:21.099000 audit[2862]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0220 a2=98 a3=0 items=0 ppid=2837 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435343161393331616438646365636230353331356436343538376364 Apr 20 19:07:21.099000 audit: BPF prog-id=62 op=UNLOAD Apr 20 19:07:21.099000 audit[2862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.099000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435343161393331616438646365636230353331356436343538376364 Apr 20 19:07:21.099000 audit: BPF prog-id=61 op=UNLOAD Apr 20 19:07:21.099000 audit[2862]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435343161393331616438646365636230353331356436343538376364 Apr 20 19:07:21.099000 audit: BPF prog-id=63 op=LOAD Apr 20 19:07:21.099000 audit[2862]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06f0 a2=98 a3=0 items=0 ppid=2837 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435343161393331616438646365636230353331356436343538376364 Apr 20 19:07:21.152000 audit: BPF prog-id=64 op=LOAD Apr 20 19:07:21.157000 audit: BPF prog-id=65 op=LOAD Apr 20 19:07:21.157000 audit[2856]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128240 a2=98 a3=0 items=0 ppid=2824 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.157000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232303330376563646636653230373035653066646431383230353962 Apr 20 19:07:21.158000 audit: BPF prog-id=65 op=UNLOAD Apr 20 19:07:21.158000 audit[2856]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2824 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232303330376563646636653230373035653066646431383230353962 Apr 20 19:07:21.160000 audit: BPF prog-id=66 op=LOAD Apr 20 19:07:21.160000 audit[2856]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128490 a2=98 a3=0 items=0 ppid=2824 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.160000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232303330376563646636653230373035653066646431383230353962 Apr 20 19:07:21.163000 audit: BPF prog-id=67 op=LOAD Apr 20 19:07:21.163000 audit[2856]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128220 a2=98 a3=0 items=0 ppid=2824 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232303330376563646636653230373035653066646431383230353962 Apr 20 19:07:21.163000 audit: BPF prog-id=67 op=UNLOAD Apr 20 19:07:21.163000 audit[2856]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2824 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232303330376563646636653230373035653066646431383230353962 Apr 20 19:07:21.166000 audit: BPF prog-id=66 op=UNLOAD Apr 20 19:07:21.166000 audit[2856]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2824 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232303330376563646636653230373035653066646431383230353962 Apr 20 19:07:21.166000 audit: BPF prog-id=68 op=LOAD Apr 20 19:07:21.166000 audit[2856]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286f0 a2=98 a3=0 items=0 ppid=2824 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.166000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232303330376563646636653230373035653066646431383230353962 Apr 20 19:07:21.342000 audit: BPF prog-id=69 op=LOAD Apr 20 19:07:21.345000 audit: BPF prog-id=70 op=LOAD Apr 20 19:07:21.345000 audit[2884]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130240 a2=98 a3=0 items=0 ppid=2849 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613163303133626239313139626533653833633936373334333136 Apr 20 19:07:21.345000 audit: BPF prog-id=70 op=UNLOAD Apr 20 19:07:21.345000 audit[2884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.345000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613163303133626239313139626533653833633936373334333136 Apr 20 19:07:21.346000 audit: BPF prog-id=71 op=LOAD Apr 20 19:07:21.346000 audit[2884]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 
a0=5 a1=c000130490 a2=98 a3=0 items=0 ppid=2849 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613163303133626239313139626533653833633936373334333136 Apr 20 19:07:21.352000 audit: BPF prog-id=72 op=LOAD Apr 20 19:07:21.352000 audit[2884]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130220 a2=98 a3=0 items=0 ppid=2849 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613163303133626239313139626533653833633936373334333136 Apr 20 19:07:21.353000 audit: BPF prog-id=72 op=UNLOAD Apr 20 19:07:21.353000 audit[2884]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613163303133626239313139626533653833633936373334333136 Apr 20 19:07:21.353000 audit: BPF prog-id=71 op=UNLOAD Apr 20 19:07:21.353000 audit[2884]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613163303133626239313139626533653833633936373334333136 Apr 20 19:07:21.353000 audit: BPF prog-id=73 op=LOAD Apr 20 19:07:21.353000 audit[2884]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306f0 a2=98 a3=0 items=0 ppid=2849 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:21.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130613163303133626239313139626533653833633936373334333136 Apr 20 19:07:21.518087 kubelet[2765]: E0420 19:07:21.515893 2765 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:07:21.558891 containerd[1659]: time="2026-04-20T19:07:21.557917522Z" level=info msg="RunPodSandbox for name:\"kube-controller-manager-localhost\" uid:\"e9ca41790ae21be9f4cbd451ade0acec\" namespace:\"kube-system\" returns sandbox id \"4541a931ad8dcecb05315d64587cd8ba7190629062d02fc0133cc1309c2941e5\"" Apr 20 
19:07:21.626910 containerd[1659]: time="2026-04-20T19:07:21.624976765Z" level=info msg="RunPodSandbox for name:\"kube-scheduler-localhost\" uid:\"33fee6ba1581201eda98a989140db110\" namespace:\"kube-system\" returns sandbox id \"b20307ecdf6e20705e0fdd182059b7830a4642f157174536595c44ea5ac2f131\"" Apr 20 19:07:21.633760 kubelet[2765]: E0420 19:07:21.633528 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:21.636669 kubelet[2765]: E0420 19:07:21.634204 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:21.713761 containerd[1659]: time="2026-04-20T19:07:21.711766279Z" level=info msg="RunPodSandbox for name:\"kube-apiserver-localhost\" uid:\"5ef51a6b32499d3d1e531fb8b3a83d4f\" namespace:\"kube-system\" returns sandbox id \"a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2\"" Apr 20 19:07:21.755269 kubelet[2765]: E0420 19:07:21.755125 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:21.836588 containerd[1659]: time="2026-04-20T19:07:21.835842499Z" level=info msg="CreateContainer within sandbox \"4541a931ad8dcecb05315d64587cd8ba7190629062d02fc0133cc1309c2941e5\" for container name:\"kube-controller-manager\"" Apr 20 19:07:21.851489 containerd[1659]: time="2026-04-20T19:07:21.848412833Z" level=info msg="CreateContainer within sandbox \"b20307ecdf6e20705e0fdd182059b7830a4642f157174536595c44ea5ac2f131\" for container name:\"kube-scheduler\"" Apr 20 19:07:21.907155 containerd[1659]: time="2026-04-20T19:07:21.905659949Z" level=info msg="CreateContainer within sandbox \"a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2\" for container 
name:\"kube-apiserver\"" Apr 20 19:07:21.963008 containerd[1659]: time="2026-04-20T19:07:21.961377711Z" level=info msg="Container d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:07:22.021964 kubelet[2765]: E0420 19:07:22.021703 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="6.4s" Apr 20 19:07:22.030515 containerd[1659]: time="2026-04-20T19:07:22.030348941Z" level=info msg="Container ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:07:22.116187 containerd[1659]: time="2026-04-20T19:07:22.116047201Z" level=info msg="Container 336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:07:22.201089 containerd[1659]: time="2026-04-20T19:07:22.187949722Z" level=info msg="CreateContainer within sandbox \"b20307ecdf6e20705e0fdd182059b7830a4642f157174536595c44ea5ac2f131\" for name:\"kube-scheduler\" returns container id \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\"" Apr 20 19:07:22.201089 containerd[1659]: time="2026-04-20T19:07:22.188290343Z" level=info msg="CreateContainer within sandbox \"4541a931ad8dcecb05315d64587cd8ba7190629062d02fc0133cc1309c2941e5\" for name:\"kube-controller-manager\" returns container id \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\"" Apr 20 19:07:22.201089 containerd[1659]: time="2026-04-20T19:07:22.195816772Z" level=info msg="StartContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\"" Apr 20 19:07:22.201089 containerd[1659]: time="2026-04-20T19:07:22.197261417Z" level=info msg="StartContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\"" Apr 
20 19:07:22.201089 containerd[1659]: time="2026-04-20T19:07:22.200701196Z" level=info msg="connecting to shim d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" address="unix:///run/containerd/s/a3573af0687a06f6f28b48e733fe9c56ab42bf434cb334a982435e3700d4ec37" protocol=ttrpc version=3 Apr 20 19:07:22.219664 containerd[1659]: time="2026-04-20T19:07:22.218156890Z" level=info msg="connecting to shim ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" address="unix:///run/containerd/s/2de9e38222e994fa67b88e4210b4b8c6d1d2095391be368ffb059e8fad5a87a8" protocol=ttrpc version=3 Apr 20 19:07:22.220264 containerd[1659]: time="2026-04-20T19:07:22.218495812Z" level=info msg="CreateContainer within sandbox \"a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2\" for name:\"kube-apiserver\" returns container id \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\"" Apr 20 19:07:22.283951 containerd[1659]: time="2026-04-20T19:07:22.260335736Z" level=info msg="StartContainer for \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\"" Apr 20 19:07:22.403652 containerd[1659]: time="2026-04-20T19:07:22.403500866Z" level=info msg="connecting to shim 336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65" address="unix:///run/containerd/s/80102222aa3ed4b7ee78377cd8f0cd98fe2254d5e4d09c655e1726e3fa17fed4" protocol=ttrpc version=3 Apr 20 19:07:22.538872 systemd[1]: Started cri-containerd-ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729.scope - libcontainer container ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729. Apr 20 19:07:22.720396 systemd[1]: Started cri-containerd-d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf.scope - libcontainer container d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf. 
Apr 20 19:07:22.960000 audit: BPF prog-id=74 op=LOAD Apr 20 19:07:23.036000 audit: BPF prog-id=75 op=LOAD Apr 20 19:07:23.036000 audit[2952]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106240 a2=98 a3=0 items=0 ppid=2824 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566383532386539376431653332613366396563333665643731393139 Apr 20 19:07:23.050000 audit: BPF prog-id=75 op=UNLOAD Apr 20 19:07:23.050000 audit[2952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2824 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566383532386539376431653332613366396563333665643731393139 Apr 20 19:07:23.062000 audit: BPF prog-id=76 op=LOAD Apr 20 19:07:23.062000 audit[2952]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106490 a2=98 a3=0 items=0 ppid=2824 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.062000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566383532386539376431653332613366396563333665643731393139 Apr 20 19:07:23.063000 audit: BPF prog-id=77 op=LOAD Apr 20 19:07:23.063000 audit[2952]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106220 a2=98 a3=0 items=0 ppid=2824 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566383532386539376431653332613366396563333665643731393139 Apr 20 19:07:23.063000 audit: BPF prog-id=77 op=UNLOAD Apr 20 19:07:23.063000 audit[2952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2824 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566383532386539376431653332613366396563333665643731393139 Apr 20 19:07:23.063000 audit: BPF prog-id=76 op=UNLOAD Apr 20 19:07:23.063000 audit[2952]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2824 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 
19:07:23.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566383532386539376431653332613366396563333665643731393139 Apr 20 19:07:23.063000 audit: BPF prog-id=78 op=LOAD Apr 20 19:07:23.063000 audit[2952]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066f0 a2=98 a3=0 items=0 ppid=2824 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.063000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566383532386539376431653332613366396563333665643731393139 Apr 20 19:07:23.122896 systemd[1]: Started cri-containerd-336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65.scope - libcontainer container 336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65. 
Apr 20 19:07:23.273000 audit: BPF prog-id=79 op=LOAD Apr 20 19:07:23.291000 audit: BPF prog-id=80 op=LOAD Apr 20 19:07:23.291000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000198240 a2=98 a3=0 items=0 ppid=2837 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333333333766656430333564376437393235313665313061333336 Apr 20 19:07:23.293000 audit: BPF prog-id=80 op=UNLOAD Apr 20 19:07:23.293000 audit[2951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.293000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333333333766656430333564376437393235313665313061333336 Apr 20 19:07:23.294000 audit: BPF prog-id=81 op=LOAD Apr 20 19:07:23.294000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000198490 a2=98 a3=0 items=0 ppid=2837 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.294000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333333333766656430333564376437393235313665313061333336 Apr 20 19:07:23.294000 audit: BPF prog-id=82 op=LOAD Apr 20 19:07:23.294000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000198220 a2=98 a3=0 items=0 ppid=2837 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333333333766656430333564376437393235313665313061333336 Apr 20 19:07:23.294000 audit: BPF prog-id=82 op=UNLOAD Apr 20 19:07:23.294000 audit[2951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333333333766656430333564376437393235313665313061333336 Apr 20 19:07:23.294000 audit: BPF prog-id=81 op=UNLOAD Apr 20 19:07:23.294000 audit[2951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2837 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 
19:07:23.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333333333766656430333564376437393235313665313061333336 Apr 20 19:07:23.294000 audit: BPF prog-id=83 op=LOAD Apr 20 19:07:23.294000 audit[2951]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001986f0 a2=98 a3=0 items=0 ppid=2837 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.294000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432333333333766656430333564376437393235313665313061333336 Apr 20 19:07:23.426676 kubelet[2765]: I0420 19:07:23.426428 2765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 19:07:23.433487 kubelet[2765]: E0420 19:07:23.429974 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Apr 20 19:07:23.478000 audit: BPF prog-id=84 op=LOAD Apr 20 19:07:23.575000 audit: BPF prog-id=85 op=LOAD Apr 20 19:07:23.575000 audit[2974]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128240 a2=98 a3=0 items=0 ppid=2849 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.575000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333366330343330383432326262343762653437646261626439613534 Apr 20 19:07:23.575000 audit: BPF prog-id=85 op=UNLOAD Apr 20 19:07:23.575000 audit[2974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333366330343330383432326262343762653437646261626439613534 Apr 20 19:07:23.621000 audit: BPF prog-id=86 op=LOAD Apr 20 19:07:23.621000 audit[2974]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128490 a2=98 a3=0 items=0 ppid=2849 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333366330343330383432326262343762653437646261626439613534 Apr 20 19:07:23.621000 audit: BPF prog-id=87 op=LOAD Apr 20 19:07:23.621000 audit[2974]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128220 a2=98 a3=0 items=0 ppid=2849 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Apr 20 19:07:23.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333366330343330383432326262343762653437646261626439613534 Apr 20 19:07:23.621000 audit: BPF prog-id=87 op=UNLOAD Apr 20 19:07:23.621000 audit[2974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333366330343330383432326262343762653437646261626439613534 Apr 20 19:07:23.621000 audit: BPF prog-id=86 op=UNLOAD Apr 20 19:07:23.621000 audit[2974]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333366330343330383432326262343762653437646261626439613534 Apr 20 19:07:23.622000 audit: BPF prog-id=88 op=LOAD Apr 20 19:07:23.622000 audit[2974]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286f0 a2=98 a3=0 items=0 ppid=2849 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:07:23.622000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333366330343330383432326262343762653437646261626439613534 Apr 20 19:07:23.866740 kubelet[2765]: E0420 19:07:23.792226 2765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a8262ed83606db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:07:15.635431131 +0000 UTC m=+5.487396571,LastTimestamp:2026-04-20 19:07:15.635431131 +0000 UTC m=+5.487396571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:07:24.119941 containerd[1659]: time="2026-04-20T19:07:24.119718955Z" level=info msg="StartContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" returns successfully" Apr 20 19:07:24.121140 containerd[1659]: time="2026-04-20T19:07:24.121078524Z" level=info msg="StartContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" returns successfully" Apr 20 19:07:24.344816 containerd[1659]: time="2026-04-20T19:07:24.343284369Z" level=info msg="StartContainer for \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\" returns successfully" Apr 20 19:07:24.654274 kubelet[2765]: E0420 19:07:24.648645 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:07:24.878194 kubelet[2765]: E0420 19:07:24.870688 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:24.898607 kubelet[2765]: E0420 19:07:24.898430 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:24.920880 kubelet[2765]: E0420 19:07:24.895396 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:07:24.965669 kubelet[2765]: E0420 19:07:24.965292 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:07:25.300096 kubelet[2765]: E0420 19:07:25.299235 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:25.308473 kubelet[2765]: E0420 19:07:25.304909 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:25.396144 kubelet[2765]: E0420 
19:07:25.395981 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:07:26.619849 kubelet[2765]: E0420 19:07:26.619627 2765 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:07:27.392363 kubelet[2765]: E0420 19:07:27.392311 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:27.393449 kubelet[2765]: E0420 19:07:27.393430 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:27.398368 kubelet[2765]: E0420 19:07:27.392458 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:27.403440 kubelet[2765]: E0420 19:07:27.392515 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:27.407647 kubelet[2765]: E0420 19:07:27.407575 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:27.408397 kubelet[2765]: E0420 19:07:27.408377 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:28.130410 kubelet[2765]: E0420 19:07:28.127997 2765 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:28.136879 kubelet[2765]: E0420 19:07:28.135416 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:28.136879 kubelet[2765]: E0420 19:07:28.136192 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:28.138501 kubelet[2765]: E0420 19:07:28.137059 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:29.435416 kubelet[2765]: E0420 19:07:29.399012 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:29.441411 kubelet[2765]: E0420 19:07:29.437473 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:30.012159 kubelet[2765]: I0420 19:07:30.011928 2765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 19:07:30.516653 kubelet[2765]: E0420 19:07:30.515515 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:30.574780 kubelet[2765]: E0420 19:07:30.573493 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:30.579119 kubelet[2765]: E0420 19:07:30.578858 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:30.579755 kubelet[2765]: E0420 19:07:30.579286 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:31.818289 kubelet[2765]: E0420 19:07:31.818199 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:31.857462 kubelet[2765]: E0420 19:07:31.856941 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:36.631600 kubelet[2765]: E0420 19:07:36.630088 2765 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:07:38.466417 kubelet[2765]: E0420 19:07:38.445527 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 19:07:40.038074 kubelet[2765]: E0420 19:07:40.037372 2765 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 20 19:07:40.043258 kubelet[2765]: E0420 19:07:40.039624 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost" Apr 20 19:07:41.778747 kubelet[2765]: E0420 19:07:41.776651 2765 kubelet.go:3305] "No need to create a mirror pod, 
since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:41.840949 kubelet[2765]: E0420 19:07:41.818997 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:43.115786 update_engine[1636]: I20260420 19:07:43.111490 1636 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 20 19:07:43.118399 update_engine[1636]: I20260420 19:07:43.116017 1636 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 20 19:07:43.118399 update_engine[1636]: I20260420 19:07:43.116840 1636 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Apr 20 19:07:43.123152 update_engine[1636]: I20260420 19:07:43.123024 1636 omaha_request_params.cc:62] Current group set to alpha Apr 20 19:07:43.123418 update_engine[1636]: I20260420 19:07:43.123301 1636 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 20 19:07:43.123418 update_engine[1636]: I20260420 19:07:43.123310 1636 update_attempter.cc:643] Scheduling an action processor start. 
Apr 20 19:07:43.123418 update_engine[1636]: I20260420 19:07:43.123328 1636 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 19:07:43.123641 update_engine[1636]: I20260420 19:07:43.123593 1636 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 20 19:07:43.128675 update_engine[1636]: I20260420 19:07:43.126945 1636 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 19:07:43.128675 update_engine[1636]: I20260420 19:07:43.127108 1636 omaha_request_action.cc:272] Request: Apr 20 19:07:43.128675 update_engine[1636]: Apr 20 19:07:43.128675 update_engine[1636]: Apr 20 19:07:43.128675 update_engine[1636]: Apr 20 19:07:43.128675 update_engine[1636]: Apr 20 19:07:43.128675 update_engine[1636]: Apr 20 19:07:43.128675 update_engine[1636]: Apr 20 19:07:43.128675 update_engine[1636]: Apr 20 19:07:43.128675 update_engine[1636]: Apr 20 19:07:43.128675 update_engine[1636]: I20260420 19:07:43.127116 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:07:43.138409 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 20 19:07:43.145801 update_engine[1636]: I20260420 19:07:43.141783 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:07:43.262954 update_engine[1636]: I20260420 19:07:43.261415 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 19:07:43.275263 update_engine[1636]: E20260420 19:07:43.274312 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:07:43.276103 update_engine[1636]: I20260420 19:07:43.275760 1636 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 20 19:07:43.963786 kubelet[2765]: E0420 19:07:43.959442 2765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.18a8262ed83606db default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:07:15.635431131 +0000 UTC m=+5.487396571,LastTimestamp:2026-04-20 19:07:15.635431131 +0000 UTC m=+5.487396571,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:07:44.182350 kubelet[2765]: E0420 19:07:44.181251 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:07:44.632438 kubelet[2765]: E0420 19:07:44.631217 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:07:45.482170 kubelet[2765]: E0420 19:07:45.481860 2765 reflector.go:200] 
"Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:07:46.177666 kubelet[2765]: E0420 19:07:46.176930 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 20 19:07:46.216913 kubelet[2765]: E0420 19:07:46.215896 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:46.306366 kubelet[2765]: E0420 19:07:46.304990 2765 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 20 19:07:46.666991 kubelet[2765]: E0420 19:07:46.666868 2765 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 20 19:07:46.971943 kubelet[2765]: I0420 19:07:46.956653 2765 apiserver.go:52] "Watching apiserver" Apr 20 19:07:47.199899 kubelet[2765]: I0420 19:07:47.199181 2765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 19:07:47.259417 kubelet[2765]: I0420 19:07:47.258994 2765 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 20 19:07:47.393091 kubelet[2765]: I0420 19:07:47.392973 2765 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 20 19:07:47.393091 kubelet[2765]: E0420 19:07:47.393084 2765 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 20 19:07:47.471053 kubelet[2765]: I0420 19:07:47.469925 2765 kubelet.go:3309] "Creating a mirror pod 
for static pod" pod="kube-system/kube-apiserver-localhost" Apr 20 19:07:47.820016 kubelet[2765]: I0420 19:07:47.819778 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 20 19:07:47.930906 kubelet[2765]: E0420 19:07:47.927736 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:48.001186 kubelet[2765]: E0420 19:07:48.001069 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:48.001748 kubelet[2765]: I0420 19:07:48.001242 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 20 19:07:48.074941 kubelet[2765]: E0420 19:07:48.070323 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:07:53.155902 update_engine[1636]: I20260420 19:07:53.149962 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:07:53.183510 update_engine[1636]: I20260420 19:07:53.159081 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:07:53.187681 update_engine[1636]: I20260420 19:07:53.181280 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 19:07:53.199739 update_engine[1636]: E20260420 19:07:53.196306 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:07:53.201698 update_engine[1636]: I20260420 19:07:53.200261 1636 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 20 19:07:57.611651 kubelet[2765]: I0420 19:07:57.583340 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=10.571387043 podStartE2EDuration="10.571387043s" podCreationTimestamp="2026-04-20 19:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 19:07:57.50950605 +0000 UTC m=+47.361471502" watchObservedRunningTime="2026-04-20 19:07:57.571387043 +0000 UTC m=+47.423352486" Apr 20 19:07:57.621995 kubelet[2765]: I0420 19:07:57.619333 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=9.619228884 podStartE2EDuration="9.619228884s" podCreationTimestamp="2026-04-20 19:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 19:07:56.644323559 +0000 UTC m=+46.496289015" watchObservedRunningTime="2026-04-20 19:07:57.619228884 +0000 UTC m=+47.471194336" Apr 20 19:07:57.851235 kubelet[2765]: I0420 19:07:57.848967 2765 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=10.848943648 podStartE2EDuration="10.848943648s" podCreationTimestamp="2026-04-20 19:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 19:07:57.826801843 +0000 UTC m=+47.678767291" watchObservedRunningTime="2026-04-20 19:07:57.848943648 +0000 
UTC m=+47.700909108" Apr 20 19:08:03.106070 update_engine[1636]: I20260420 19:08:03.104789 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:08:03.119463 update_engine[1636]: I20260420 19:08:03.116207 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:08:03.120405 update_engine[1636]: I20260420 19:08:03.120335 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 20 19:08:03.188246 update_engine[1636]: E20260420 19:08:03.178506 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:08:03.188246 update_engine[1636]: I20260420 19:08:03.188020 1636 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 20 19:08:06.339106 kubelet[2765]: E0420 19:08:06.338280 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:07.282778 systemd[1]: Reload requested from client PID 3059 ('systemctl') (unit session-8.scope)... Apr 20 19:08:07.282879 systemd[1]: Reloading... Apr 20 19:08:10.002261 systemd-ssh-generator[3109]: Failed to query local AF_VSOCK CID: Cannot assign requested address Apr 20 19:08:10.163156 (sd-exec-[3090]: /usr/lib/systemd/system-generators/systemd-ssh-generator failed with exit status 1. Apr 20 19:08:10.198008 zram_generator::config[3116]: No configuration found. 
Apr 20 19:08:10.513441 kubelet[2765]: E0420 19:08:10.511714 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:11.025165 kubelet[2765]: E0420 19:08:11.024983 2765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:12.051315 kubelet[2765]: I0420 19:08:12.050199 2765 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 20 19:08:12.077950 containerd[1659]: time="2026-04-20T19:08:12.077305846Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 20 19:08:12.180233 kubelet[2765]: I0420 19:08:12.180157 2765 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 20 19:08:13.169601 update_engine[1636]: I20260420 19:08:13.152614 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:08:13.169601 update_engine[1636]: I20260420 19:08:13.154973 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:08:13.169601 update_engine[1636]: I20260420 19:08:13.168040 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 19:08:13.171976 systemd[1]: /usr/lib/systemd/system/update-engine.service:10: Support for option BlockIOWeight= has been removed and it is ignored Apr 20 19:08:13.229954 update_engine[1636]: E20260420 19:08:13.189411 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.189638 1636 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.189651 1636 omaha_request_action.cc:617] Omaha request response: Apr 20 19:08:13.229954 update_engine[1636]: E20260420 19:08:13.190172 1636 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.190380 1636 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.190392 1636 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.190397 1636 update_attempter.cc:306] Processing Done. Apr 20 19:08:13.229954 update_engine[1636]: E20260420 19:08:13.190408 1636 update_attempter.cc:619] Update failed. Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.190413 1636 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.190418 1636 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.190423 1636 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.190495 1636 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.197225 1636 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 19:08:13.229954 update_engine[1636]: I20260420 19:08:13.198233 1636 omaha_request_action.cc:272] Request: Apr 20 19:08:13.229954 update_engine[1636]: Apr 20 19:08:13.229954 update_engine[1636]: Apr 20 19:08:13.240474 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 20 19:08:13.244446 update_engine[1636]: Apr 20 19:08:13.244446 update_engine[1636]: Apr 20 19:08:13.244446 update_engine[1636]: Apr 20 19:08:13.244446 update_engine[1636]: Apr 20 19:08:13.244446 update_engine[1636]: I20260420 19:08:13.198402 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:08:13.244446 update_engine[1636]: I20260420 19:08:13.198573 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:08:13.244446 update_engine[1636]: I20260420 19:08:13.208302 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 19:08:13.244446 update_engine[1636]: E20260420 19:08:13.215451 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:08:13.244446 update_engine[1636]: I20260420 19:08:13.222461 1636 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 19:08:13.244446 update_engine[1636]: I20260420 19:08:13.222808 1636 omaha_request_action.cc:617] Omaha request response: Apr 20 19:08:13.244446 update_engine[1636]: I20260420 19:08:13.222822 1636 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:08:13.244446 update_engine[1636]: I20260420 19:08:13.222855 1636 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:08:13.244446 update_engine[1636]: I20260420 19:08:13.222862 1636 update_attempter.cc:306] Processing Done. Apr 20 19:08:13.244446 update_engine[1636]: I20260420 19:08:13.222872 1636 update_attempter.cc:310] Error event sent. Apr 20 19:08:13.244446 update_engine[1636]: I20260420 19:08:13.222920 1636 update_check_scheduler.cc:74] Next update check in 41m31s Apr 20 19:08:13.282917 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 20 19:08:14.834118 kubelet[2765]: I0420 19:08:14.833344 2765 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 20 19:08:16.141107 systemd[1]: Reloading finished in 8856 ms. 
Apr 20 19:08:16.190000 audit: BPF prog-id=89 op=LOAD Apr 20 19:08:16.195625 kernel: kauditd_printk_skb: 122 callbacks suppressed Apr 20 19:08:16.195689 kernel: audit: type=1334 audit(1776712096.190:376): prog-id=89 op=LOAD Apr 20 19:08:16.195000 audit: BPF prog-id=59 op=UNLOAD Apr 20 19:08:16.198616 kernel: audit: type=1334 audit(1776712096.195:377): prog-id=59 op=UNLOAD Apr 20 19:08:16.199000 audit: BPF prog-id=90 op=LOAD Apr 20 19:08:16.202219 kernel: audit: type=1334 audit(1776712096.199:378): prog-id=90 op=LOAD Apr 20 19:08:16.199000 audit: BPF prog-id=39 op=UNLOAD Apr 20 19:08:16.211622 kernel: audit: type=1334 audit(1776712096.199:379): prog-id=39 op=UNLOAD Apr 20 19:08:16.199000 audit: BPF prog-id=91 op=LOAD Apr 20 19:08:16.199000 audit: BPF prog-id=92 op=LOAD Apr 20 19:08:16.213130 kernel: audit: type=1334 audit(1776712096.199:380): prog-id=91 op=LOAD Apr 20 19:08:16.213158 kernel: audit: type=1334 audit(1776712096.199:381): prog-id=92 op=LOAD Apr 20 19:08:16.199000 audit: BPF prog-id=40 op=UNLOAD Apr 20 19:08:16.216635 kernel: audit: type=1334 audit(1776712096.199:382): prog-id=40 op=UNLOAD Apr 20 19:08:16.199000 audit: BPF prog-id=41 op=UNLOAD Apr 20 19:08:16.220424 kernel: audit: type=1334 audit(1776712096.199:383): prog-id=41 op=UNLOAD Apr 20 19:08:16.213000 audit: BPF prog-id=93 op=LOAD Apr 20 19:08:16.222705 kernel: audit: type=1334 audit(1776712096.213:384): prog-id=93 op=LOAD Apr 20 19:08:16.213000 audit: BPF prog-id=64 op=UNLOAD Apr 20 19:08:16.216000 audit: BPF prog-id=94 op=LOAD Apr 20 19:08:16.216000 audit: BPF prog-id=42 op=UNLOAD Apr 20 19:08:16.224653 kernel: audit: type=1334 audit(1776712096.213:385): prog-id=64 op=UNLOAD Apr 20 19:08:16.216000 audit: BPF prog-id=95 op=LOAD Apr 20 19:08:16.216000 audit: BPF prog-id=96 op=LOAD Apr 20 19:08:16.216000 audit: BPF prog-id=43 op=UNLOAD Apr 20 19:08:16.216000 audit: BPF prog-id=44 op=UNLOAD Apr 20 19:08:16.222000 audit: BPF prog-id=97 op=LOAD Apr 20 19:08:16.222000 audit: BPF prog-id=69 op=UNLOAD 
Apr 20 19:08:16.223000 audit: BPF prog-id=98 op=LOAD Apr 20 19:08:16.223000 audit: BPF prog-id=45 op=UNLOAD Apr 20 19:08:16.223000 audit: BPF prog-id=99 op=LOAD Apr 20 19:08:16.223000 audit: BPF prog-id=100 op=LOAD Apr 20 19:08:16.223000 audit: BPF prog-id=46 op=UNLOAD Apr 20 19:08:16.223000 audit: BPF prog-id=47 op=UNLOAD Apr 20 19:08:16.224000 audit: BPF prog-id=101 op=LOAD Apr 20 19:08:16.224000 audit: BPF prog-id=48 op=UNLOAD Apr 20 19:08:16.279000 audit: BPF prog-id=102 op=LOAD Apr 20 19:08:16.279000 audit: BPF prog-id=49 op=UNLOAD Apr 20 19:08:16.315000 audit: BPF prog-id=103 op=LOAD Apr 20 19:08:16.315000 audit: BPF prog-id=50 op=UNLOAD Apr 20 19:08:16.315000 audit: BPF prog-id=104 op=LOAD Apr 20 19:08:16.315000 audit: BPF prog-id=105 op=LOAD Apr 20 19:08:16.315000 audit: BPF prog-id=51 op=UNLOAD Apr 20 19:08:16.315000 audit: BPF prog-id=52 op=UNLOAD Apr 20 19:08:16.322000 audit: BPF prog-id=106 op=LOAD Apr 20 19:08:16.322000 audit: BPF prog-id=79 op=UNLOAD Apr 20 19:08:16.333000 audit: BPF prog-id=107 op=LOAD Apr 20 19:08:16.333000 audit: BPF prog-id=74 op=UNLOAD Apr 20 19:08:16.344000 audit: BPF prog-id=108 op=LOAD Apr 20 19:08:16.345000 audit: BPF prog-id=53 op=UNLOAD Apr 20 19:08:16.346000 audit: BPF prog-id=109 op=LOAD Apr 20 19:08:16.346000 audit: BPF prog-id=110 op=LOAD Apr 20 19:08:16.346000 audit: BPF prog-id=54 op=UNLOAD Apr 20 19:08:16.346000 audit: BPF prog-id=55 op=UNLOAD Apr 20 19:08:16.346000 audit: BPF prog-id=111 op=LOAD Apr 20 19:08:16.346000 audit: BPF prog-id=84 op=UNLOAD Apr 20 19:08:16.347000 audit: BPF prog-id=112 op=LOAD Apr 20 19:08:16.348000 audit: BPF prog-id=56 op=UNLOAD Apr 20 19:08:16.359000 audit: BPF prog-id=113 op=LOAD Apr 20 19:08:16.359000 audit: BPF prog-id=114 op=LOAD Apr 20 19:08:16.359000 audit: BPF prog-id=57 op=UNLOAD Apr 20 19:08:16.359000 audit: BPF prog-id=58 op=UNLOAD Apr 20 19:08:17.078662 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 20 19:08:17.162787 systemd[1]: kubelet.service: Deactivated successfully. Apr 20 19:08:17.163358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:08:17.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:08:17.163481 systemd[1]: kubelet.service: Consumed 27.950s CPU time, 134.8M memory peak. Apr 20 19:08:17.342108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 20 19:08:18.154298 systemd[1768]: Created slice background.slice - User Background Tasks Slice. Apr 20 19:08:18.155975 systemd[1768]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Apr 20 19:08:18.383322 systemd[1768]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Apr 20 19:08:21.840895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 20 19:08:21.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:08:21.846595 kernel: kauditd_printk_skb: 43 callbacks suppressed Apr 20 19:08:21.846673 kernel: audit: type=1130 audit(1776712101.843:429): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:08:21.884174 (kubelet)[3163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 20 19:08:22.147271 kubelet[3163]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 19:08:22.147271 kubelet[3163]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 20 19:08:22.147271 kubelet[3163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 20 19:08:22.147271 kubelet[3163]: I0420 19:08:22.134448 3163 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 20 19:08:22.301957 kubelet[3163]: I0420 19:08:22.301811 3163 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 20 19:08:22.301957 kubelet[3163]: I0420 19:08:22.301889 3163 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 20 19:08:22.307628 kubelet[3163]: I0420 19:08:22.302335 3163 server.go:956] "Client rotation is on, will bootstrap in background" Apr 20 19:08:22.362751 kubelet[3163]: I0420 19:08:22.361762 3163 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 20 19:08:22.489419 kubelet[3163]: I0420 19:08:22.488989 3163 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 20 19:08:22.563483 kubelet[3163]: I0420 19:08:22.563413 3163 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 20 19:08:22.664162 kubelet[3163]: I0420 19:08:22.660691 3163 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 20 19:08:22.664162 kubelet[3163]: I0420 19:08:22.660952 3163 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 20 19:08:22.664162 kubelet[3163]: I0420 19:08:22.660977 3163 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 20 19:08:22.664162 kubelet[3163]: I0420 19:08:22.661502 3163 topology_manager.go:138] "Creating topology manager with none policy" Apr 20 19:08:22.667660 
kubelet[3163]: I0420 19:08:22.661861 3163 container_manager_linux.go:303] "Creating device plugin manager" Apr 20 19:08:22.667660 kubelet[3163]: I0420 19:08:22.662067 3163 state_mem.go:36] "Initialized new in-memory state store" Apr 20 19:08:22.667660 kubelet[3163]: I0420 19:08:22.662493 3163 kubelet.go:480] "Attempting to sync node with API server" Apr 20 19:08:22.667660 kubelet[3163]: I0420 19:08:22.662504 3163 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 20 19:08:22.667660 kubelet[3163]: I0420 19:08:22.662526 3163 kubelet.go:386] "Adding apiserver pod source" Apr 20 19:08:22.667660 kubelet[3163]: I0420 19:08:22.664681 3163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 20 19:08:22.674436 kubelet[3163]: I0420 19:08:22.671955 3163 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.2.1" apiVersion="v1" Apr 20 19:08:22.689079 kubelet[3163]: I0420 19:08:22.676364 3163 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 20 19:08:23.036372 kubelet[3163]: I0420 19:08:23.036274 3163 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 20 19:08:23.036372 kubelet[3163]: I0420 19:08:23.036382 3163 server.go:1289] "Started kubelet" Apr 20 19:08:23.037435 kubelet[3163]: I0420 19:08:23.037292 3163 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 20 19:08:23.040342 kubelet[3163]: I0420 19:08:23.037964 3163 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 20 19:08:23.044323 kubelet[3163]: I0420 19:08:23.041478 3163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 20 19:08:23.044323 kubelet[3163]: I0420 19:08:23.041715 3163 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 20 19:08:23.088665 
kubelet[3163]: I0420 19:08:23.088208 3163 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 20 19:08:23.112954 kubelet[3163]: I0420 19:08:23.111209 3163 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 20 19:08:23.150630 kubelet[3163]: I0420 19:08:23.147141 3163 server.go:317] "Adding debug handlers to kubelet server" Apr 20 19:08:23.228704 kubelet[3163]: I0420 19:08:23.228376 3163 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 20 19:08:23.297904 kubelet[3163]: I0420 19:08:23.297663 3163 factory.go:223] Registration of the systemd container factory successfully Apr 20 19:08:23.338596 kubelet[3163]: I0420 19:08:23.336280 3163 reconciler.go:26] "Reconciler: start to sync state" Apr 20 19:08:23.416066 kubelet[3163]: I0420 19:08:23.415802 3163 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 20 19:08:23.457494 kubelet[3163]: E0420 19:08:23.456602 3163 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 20 19:08:23.613611 kubelet[3163]: I0420 19:08:23.597325 3163 factory.go:223] Registration of the containerd container factory successfully Apr 20 19:08:23.674027 kubelet[3163]: I0420 19:08:23.669285 3163 apiserver.go:52] "Watching apiserver" Apr 20 19:08:24.664611 kubelet[3163]: I0420 19:08:24.656601 3163 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 20 19:08:24.856290 kubelet[3163]: I0420 19:08:24.854836 3163 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 20 19:08:24.857211 kubelet[3163]: I0420 19:08:24.857189 3163 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 20 19:08:24.857350 kubelet[3163]: I0420 19:08:24.857341 3163 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 20 19:08:24.857662 kubelet[3163]: I0420 19:08:24.857653 3163 kubelet.go:2436] "Starting kubelet main sync loop" Apr 20 19:08:24.857917 kubelet[3163]: E0420 19:08:24.857890 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 20 19:08:25.060494 kubelet[3163]: E0420 19:08:25.055980 3163 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 19:08:25.257314 kubelet[3163]: E0420 19:08:25.257089 3163 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 20 19:08:25.282929 kubelet[3163]: I0420 19:08:25.281354 3163 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 20 19:08:25.284796 kubelet[3163]: I0420 19:08:25.283411 3163 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 20 19:08:25.284796 kubelet[3163]: I0420 19:08:25.284252 3163 state_mem.go:36] "Initialized new in-memory state store" Apr 20 19:08:25.285137 kubelet[3163]: I0420 19:08:25.285031 3163 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 20 19:08:25.285137 kubelet[3163]: I0420 19:08:25.285042 3163 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 20 19:08:25.285137 kubelet[3163]: I0420 19:08:25.285058 3163 policy_none.go:49] "None policy: Start" Apr 20 19:08:25.285137 kubelet[3163]: I0420 19:08:25.285071 3163 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 20 19:08:25.285137 kubelet[3163]: I0420 19:08:25.285079 3163 state_mem.go:35] "Initializing new in-memory state 
store" Apr 20 19:08:25.285365 kubelet[3163]: I0420 19:08:25.285214 3163 state_mem.go:75] "Updated machine memory state" Apr 20 19:08:25.454920 kubelet[3163]: E0420 19:08:25.454812 3163 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 20 19:08:25.464721 kubelet[3163]: I0420 19:08:25.464418 3163 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 20 19:08:25.470007 kubelet[3163]: I0420 19:08:25.468973 3163 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 20 19:08:25.476952 kubelet[3163]: I0420 19:08:25.472733 3163 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 20 19:08:25.536896 kubelet[3163]: I0420 19:08:25.482494 3163 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 20 19:08:25.551737 kubelet[3163]: E0420 19:08:25.551705 3163 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 20 19:08:25.572970 containerd[1659]: time="2026-04-20T19:08:25.570993072Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 20 19:08:25.632326 kubelet[3163]: I0420 19:08:25.623317 3163 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 20 19:08:25.761420 kubelet[3163]: I0420 19:08:25.760166 3163 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 20 19:08:25.858755 kubelet[3163]: I0420 19:08:25.858705 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ncsk\" (UniqueName: \"kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk\") pod \"kube-proxy-c6mkn\" (UID: \"526e8f89-8d32-4504-b20c-956610c7bb82\") " pod="kube-system/kube-proxy-c6mkn" Apr 20 19:08:25.862585 kubelet[3163]: I0420 19:08:25.859212 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ef51a6b32499d3d1e531fb8b3a83d4f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ef51a6b32499d3d1e531fb8b3a83d4f\") " pod="kube-system/kube-apiserver-localhost" Apr 20 19:08:25.862585 kubelet[3163]: I0420 19:08:25.862172 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 19:08:25.862585 kubelet[3163]: I0420 19:08:25.862341 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 19:08:25.862585 kubelet[3163]: I0420 19:08:25.862363 3163 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 19:08:25.862585 kubelet[3163]: I0420 19:08:25.862435 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 19:08:25.863590 kubelet[3163]: I0420 19:08:25.862528 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/526e8f89-8d32-4504-b20c-956610c7bb82-lib-modules\") pod \"kube-proxy-c6mkn\" (UID: \"526e8f89-8d32-4504-b20c-956610c7bb82\") " pod="kube-system/kube-proxy-c6mkn" Apr 20 19:08:25.863590 kubelet[3163]: I0420 19:08:25.863414 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ef51a6b32499d3d1e531fb8b3a83d4f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ef51a6b32499d3d1e531fb8b3a83d4f\") " pod="kube-system/kube-apiserver-localhost" Apr 20 19:08:25.863590 kubelet[3163]: I0420 19:08:25.863436 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ef51a6b32499d3d1e531fb8b3a83d4f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ef51a6b32499d3d1e531fb8b3a83d4f\") " pod="kube-system/kube-apiserver-localhost" Apr 20 19:08:25.863590 kubelet[3163]: I0420 19:08:25.863455 3163 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 20 19:08:25.863590 kubelet[3163]: I0420 19:08:25.863473 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 20 19:08:25.959831 kubelet[3163]: I0420 19:08:25.863490 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy\") pod \"kube-proxy-c6mkn\" (UID: \"526e8f89-8d32-4504-b20c-956610c7bb82\") " pod="kube-system/kube-proxy-c6mkn" Apr 20 19:08:25.959831 kubelet[3163]: I0420 19:08:25.863506 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/526e8f89-8d32-4504-b20c-956610c7bb82-xtables-lock\") pod \"kube-proxy-c6mkn\" (UID: \"526e8f89-8d32-4504-b20c-956610c7bb82\") " pod="kube-system/kube-proxy-c6mkn" Apr 20 19:08:25.985182 kubelet[3163]: I0420 19:08:25.984775 3163 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 20 19:08:25.985051 systemd[1]: Created slice kubepods-besteffort-pod526e8f89_8d32_4504_b20c_956610c7bb82.slice - libcontainer container kubepods-besteffort-pod526e8f89_8d32_4504_b20c_956610c7bb82.slice. 
Apr 20 19:08:26.158340 kubelet[3163]: E0420 19:08:26.081094 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:26.158340 kubelet[3163]: E0420 19:08:26.146234 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:26.158340 kubelet[3163]: E0420 19:08:26.148863 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:26.186505 kubelet[3163]: I0420 19:08:26.185973 3163 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 20 19:08:26.216483 kubelet[3163]: I0420 19:08:26.214432 3163 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 20 19:08:26.550953 kubelet[3163]: E0420 19:08:26.546781 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:26.553877 kubelet[3163]: E0420 19:08:26.553378 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:26.562053 kubelet[3163]: E0420 19:08:26.554178 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:26.630978 kubelet[3163]: E0420 19:08:26.630448 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:26.698751 containerd[1659]: 
time="2026-04-20T19:08:26.698014359Z" level=info msg="RunPodSandbox for name:\"kube-proxy-c6mkn\" uid:\"526e8f89-8d32-4504-b20c-956610c7bb82\" namespace:\"kube-system\"" Apr 20 19:08:27.102468 containerd[1659]: time="2026-04-20T19:08:27.102022748Z" level=info msg="connecting to shim 2c13164970d6ef1f6141d749c815cb3471b4fde57d2ae638fe20a36a6b16d239" address="unix:///run/containerd/s/ca2ce10a5fbc4a1747a7d8ffe39d9ca9a75c825f0fb20d447441e2d2c47dff75" namespace=k8s.io protocol=ttrpc version=3 Apr 20 19:08:27.655596 kubelet[3163]: E0420 19:08:27.655419 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:27.658715 kubelet[3163]: E0420 19:08:27.658631 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:27.861325 systemd[1]: Started cri-containerd-2c13164970d6ef1f6141d749c815cb3471b4fde57d2ae638fe20a36a6b16d239.scope - libcontainer container 2c13164970d6ef1f6141d749c815cb3471b4fde57d2ae638fe20a36a6b16d239. 
Apr 20 19:08:28.420000 audit: BPF prog-id=115 op=LOAD Apr 20 19:08:28.439041 kernel: audit: type=1334 audit(1776712108.420:430): prog-id=115 op=LOAD Apr 20 19:08:28.458000 audit: BPF prog-id=116 op=LOAD Apr 20 19:08:28.458000 audit[3231]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a240 a2=98 a3=0 items=0 ppid=3218 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:28.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263313331363439373064366566316636313431643734396338313563 Apr 20 19:08:28.458000 audit: BPF prog-id=116 op=UNLOAD Apr 20 19:08:28.458000 audit[3231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3218 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:28.539782 kernel: audit: type=1334 audit(1776712108.458:431): prog-id=116 op=LOAD Apr 20 19:08:28.540023 kernel: audit: type=1300 audit(1776712108.458:431): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a240 a2=98 a3=0 items=0 ppid=3218 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:28.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263313331363439373064366566316636313431643734396338313563 Apr 20 19:08:28.544019 kernel: audit: 
type=1327 audit(1776712108.458:431): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263313331363439373064366566316636313431643734396338313563 Apr 20 19:08:28.544878 kernel: audit: type=1334 audit(1776712108.458:432): prog-id=116 op=UNLOAD Apr 20 19:08:28.545100 kernel: audit: type=1300 audit(1776712108.458:432): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3218 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:28.545133 kernel: audit: type=1327 audit(1776712108.458:432): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263313331363439373064366566316636313431643734396338313563 Apr 20 19:08:28.464000 audit: BPF prog-id=117 op=LOAD Apr 20 19:08:28.464000 audit[3231]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a490 a2=98 a3=0 items=0 ppid=3218 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:28.562141 kernel: audit: type=1334 audit(1776712108.464:433): prog-id=117 op=LOAD Apr 20 19:08:28.562359 kernel: audit: type=1300 audit(1776712108.464:433): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a490 a2=98 a3=0 items=0 ppid=3218 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:28.464000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263313331363439373064366566316636313431643734396338313563 Apr 20 19:08:28.568778 kernel: audit: type=1327 audit(1776712108.464:433): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263313331363439373064366566316636313431643734396338313563 Apr 20 19:08:28.470000 audit: BPF prog-id=118 op=LOAD Apr 20 19:08:28.470000 audit[3231]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a220 a2=98 a3=0 items=0 ppid=3218 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:28.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263313331363439373064366566316636313431643734396338313563 Apr 20 19:08:28.470000 audit: BPF prog-id=118 op=UNLOAD Apr 20 19:08:28.470000 audit[3231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3218 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:28.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263313331363439373064366566316636313431643734396338313563 Apr 20 19:08:28.470000 audit: 
BPF prog-id=117 op=UNLOAD Apr 20 19:08:28.470000 audit[3231]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3218 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:28.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263313331363439373064366566316636313431643734396338313563 Apr 20 19:08:28.470000 audit: BPF prog-id=119 op=LOAD Apr 20 19:08:28.470000 audit[3231]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6f0 a2=98 a3=0 items=0 ppid=3218 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:28.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263313331363439373064366566316636313431643734396338313563 Apr 20 19:08:29.049403 containerd[1659]: time="2026-04-20T19:08:29.049201114Z" level=info msg="RunPodSandbox for name:\"kube-proxy-c6mkn\" uid:\"526e8f89-8d32-4504-b20c-956610c7bb82\" namespace:\"kube-system\" returns sandbox id \"2c13164970d6ef1f6141d749c815cb3471b4fde57d2ae638fe20a36a6b16d239\"" Apr 20 19:08:29.156071 kubelet[3163]: E0420 19:08:29.154519 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:29.429349 containerd[1659]: time="2026-04-20T19:08:29.429066287Z" level=info msg="CreateContainer within sandbox 
\"2c13164970d6ef1f6141d749c815cb3471b4fde57d2ae638fe20a36a6b16d239\" for container name:\"kube-proxy\"" Apr 20 19:08:29.754341 kubelet[3163]: E0420 19:08:29.754067 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:29.931690 containerd[1659]: time="2026-04-20T19:08:29.924698599Z" level=info msg="Container 3b0a4a3b835e85fc99f3ab2b047e3f77e9f11959e0cedf296580685144662c2a: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:08:29.984977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078156993.mount: Deactivated successfully. Apr 20 19:08:30.310930 containerd[1659]: time="2026-04-20T19:08:30.310611937Z" level=info msg="CreateContainer within sandbox \"2c13164970d6ef1f6141d749c815cb3471b4fde57d2ae638fe20a36a6b16d239\" for name:\"kube-proxy\" returns container id \"3b0a4a3b835e85fc99f3ab2b047e3f77e9f11959e0cedf296580685144662c2a\"" Apr 20 19:08:30.339314 containerd[1659]: time="2026-04-20T19:08:30.334509382Z" level=info msg="StartContainer for \"3b0a4a3b835e85fc99f3ab2b047e3f77e9f11959e0cedf296580685144662c2a\"" Apr 20 19:08:30.360623 containerd[1659]: time="2026-04-20T19:08:30.360189139Z" level=info msg="connecting to shim 3b0a4a3b835e85fc99f3ab2b047e3f77e9f11959e0cedf296580685144662c2a" address="unix:///run/containerd/s/ca2ce10a5fbc4a1747a7d8ffe39d9ca9a75c825f0fb20d447441e2d2c47dff75" protocol=ttrpc version=3 Apr 20 19:08:30.702110 kubelet[3163]: E0420 19:08:30.700221 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:31.056497 systemd[1]: Started cri-containerd-3b0a4a3b835e85fc99f3ab2b047e3f77e9f11959e0cedf296580685144662c2a.scope - libcontainer container 3b0a4a3b835e85fc99f3ab2b047e3f77e9f11959e0cedf296580685144662c2a. 
Apr 20 19:08:31.606644 kubelet[3163]: E0420 19:08:31.606512 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:31.606644 kubelet[3163]: E0420 19:08:31.612186 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:32.315000 audit: BPF prog-id=120 op=LOAD Apr 20 19:08:32.315000 audit[3257]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001bc490 a2=98 a3=0 items=0 ppid=3218 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:32.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362306134613362383335653835666339396633616232623034376533 Apr 20 19:08:32.316000 audit: BPF prog-id=121 op=LOAD Apr 20 19:08:32.316000 audit[3257]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001bc220 a2=98 a3=0 items=0 ppid=3218 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:32.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362306134613362383335653835666339396633616232623034376533 Apr 20 19:08:32.317000 audit: BPF prog-id=121 op=UNLOAD Apr 20 19:08:32.317000 audit[3257]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 
a2=0 a3=0 items=0 ppid=3218 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:32.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362306134613362383335653835666339396633616232623034376533 Apr 20 19:08:32.317000 audit: BPF prog-id=120 op=UNLOAD Apr 20 19:08:32.317000 audit[3257]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=13 a1=0 a2=0 a3=0 items=0 ppid=3218 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:32.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362306134613362383335653835666339396633616232623034376533 Apr 20 19:08:32.317000 audit: BPF prog-id=122 op=LOAD Apr 20 19:08:32.317000 audit[3257]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001bc6f0 a2=98 a3=0 items=0 ppid=3218 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:32.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362306134613362383335653835666339396633616232623034376533 Apr 20 19:08:32.745798 containerd[1659]: time="2026-04-20T19:08:32.745286839Z" level=info msg="StartContainer for 
\"3b0a4a3b835e85fc99f3ab2b047e3f77e9f11959e0cedf296580685144662c2a\" returns successfully" Apr 20 19:08:33.964816 kubelet[3163]: E0420 19:08:33.962400 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:35.185632 kubelet[3163]: E0420 19:08:35.185500 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:35.335436 kubelet[3163]: E0420 19:08:35.335272 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:35.875783 kubelet[3163]: I0420 19:08:35.865778 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c6mkn" podStartSLOduration=11.865721446 podStartE2EDuration="11.865721446s" podCreationTimestamp="2026-04-20 19:08:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-20 19:08:34.35415746 +0000 UTC m=+12.448407167" watchObservedRunningTime="2026-04-20 19:08:35.865721446 +0000 UTC m=+13.959971154" Apr 20 19:08:36.139250 kubelet[3163]: E0420 19:08:36.136509 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:08:38.853000 audit[3324]: NETFILTER_CFG table=mangle:52 family=2 entries=1 op=nft_register_chain pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:38.867840 kernel: kauditd_printk_skb: 27 callbacks suppressed Apr 20 19:08:38.867962 kernel: audit: type=1325 audit(1776712118.853:443): table=mangle:52 family=2 entries=1 op=nft_register_chain pid=3324 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:38.853000 audit[3324]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6f8d13e0 a2=0 a3=7ffe6f8d13cc items=0 ppid=3270 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:38.925458 kernel: audit: type=1300 audit(1776712118.853:443): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6f8d13e0 a2=0 a3=7ffe6f8d13cc items=0 ppid=3270 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:38.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Apr 20 19:08:38.947613 kernel: audit: type=1327 audit(1776712118.853:443): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Apr 20 19:08:38.960000 audit[3326]: NETFILTER_CFG table=nat:53 family=2 entries=1 op=nft_register_chain pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:38.965662 kernel: audit: type=1325 audit(1776712118.960:444): table=nat:53 family=2 entries=1 op=nft_register_chain pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:38.960000 audit[3326]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcce475570 a2=0 a3=7ffcce47555c items=0 ppid=3270 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:38.974205 kernel: audit: type=1300 audit(1776712118.960:444): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcce475570 a2=0 
a3=7ffcce47555c items=0 ppid=3270 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:38.960000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Apr 20 19:08:39.004218 kernel: audit: type=1327 audit(1776712118.960:444): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Apr 20 19:08:39.012000 audit[3327]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:39.021217 kernel: audit: type=1325 audit(1776712119.012:445): table=filter:54 family=2 entries=1 op=nft_register_chain pid=3327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:39.012000 audit[3327]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8a861330 a2=0 a3=7ffd8a86131c items=0 ppid=3270 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:39.044632 kernel: audit: type=1300 audit(1776712119.012:445): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8a861330 a2=0 a3=7ffd8a86131c items=0 ppid=3270 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:39.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Apr 20 19:08:39.051778 kernel: audit: type=1327 audit(1776712119.012:445): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Apr 20 19:08:39.241000 audit[3329]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:39.241000 audit[3329]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffed6cacd0 a2=0 a3=7fffed6cacbc items=0 ppid=3270 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:39.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Apr 20 19:08:39.257357 kernel: audit: type=1325 audit(1776712119.241:446): table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3329 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:39.271000 audit[3333]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3333 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:39.271000 audit[3333]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9be43010 a2=0 a3=7ffd9be42ffc items=0 ppid=3270 pid=3333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:39.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Apr 20 19:08:39.412000 audit[3334]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3334 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:39.412000 audit[3334]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff32b25b70 a2=0 a3=7fff32b25b5c items=0 ppid=3270 pid=3334 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:39.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Apr 20 19:08:39.659000 audit[3335]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:39.659000 audit[3335]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc2cea5b10 a2=0 a3=7ffc2cea5afc items=0 ppid=3270 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:39.659000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Apr 20 19:08:40.002000 audit[3337]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:40.002000 audit[3337]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcde4434f0 a2=0 a3=7ffcde4434dc items=0 ppid=3270 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:40.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Apr 20 19:08:40.409000 audit[3340]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=3340 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:40.409000 audit[3340]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc19d0c620 a2=0 a3=7ffc19d0c60c items=0 ppid=3270 pid=3340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:40.409000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Apr 20 19:08:40.458000 audit[3341]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:40.458000 audit[3341]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd176881e0 a2=0 a3=7ffd176881cc items=0 ppid=3270 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:40.458000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Apr 20 19:08:40.678000 audit[3343]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:40.678000 audit[3343]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf25e3130 a2=0 a3=7ffcf25e311c items=0 ppid=3270 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:40.678000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Apr 20 19:08:40.792000 audit[3344]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:40.792000 audit[3344]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff5553eb0 a2=0 a3=7ffff5553e9c items=0 ppid=3270 pid=3344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:40.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Apr 20 19:08:40.817000 audit[3346]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3346 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:40.817000 audit[3346]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffff3bf7540 a2=0 a3=7ffff3bf752c items=0 ppid=3270 pid=3346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:40.817000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Apr 20 19:08:41.008000 audit[3349]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_rule pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.008000 audit[3349]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7ffcfd9d3300 a2=0 a3=7ffcfd9d32ec items=0 ppid=3270 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.008000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Apr 20 19:08:41.057000 audit[3350]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_chain pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.057000 audit[3350]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca4d3fa00 a2=0 a3=7ffca4d3f9ec items=0 ppid=3270 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.057000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Apr 20 19:08:41.177000 audit[3352]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.177000 audit[3352]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc809a3940 a2=0 a3=7ffc809a392c items=0 ppid=3270 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.177000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Apr 20 19:08:41.187000 audit[3353]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.187000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc51c2b90 a2=0 a3=7ffcc51c2b7c items=0 ppid=3270 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.187000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Apr 20 19:08:41.240000 audit[3355]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3355 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.240000 audit[3355]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee0ccece0 a2=0 a3=7ffee0cceccc items=0 ppid=3270 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Apr 20 19:08:41.477000 audit[3358]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.477000 audit[3358]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffee2304980 a2=0 a3=7ffee230496c items=0 ppid=3270 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.477000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Apr 20 19:08:41.622000 audit[3361]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.622000 audit[3361]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce5c5e6f0 a2=0 a3=7ffce5c5e6dc items=0 ppid=3270 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.622000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Apr 20 19:08:41.637000 audit[3362]: NETFILTER_CFG table=nat:72 family=2 entries=1 op=nft_register_chain pid=3362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.637000 audit[3362]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff569598b0 a2=0 a3=7fff5695989c items=0 ppid=3270 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.637000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Apr 20 19:08:41.649000 audit[3364]: NETFILTER_CFG table=nat:73 family=2 entries=1 op=nft_register_rule pid=3364 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.649000 audit[3364]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd70f42d20 a2=0 a3=7ffd70f42d0c items=0 ppid=3270 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.649000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Apr 20 19:08:41.812000 audit[3367]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_rule pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.812000 audit[3367]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe2e300ee0 a2=0 a3=7ffe2e300ecc items=0 ppid=3270 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Apr 20 19:08:41.824000 audit[3368]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_chain pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:41.824000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2eaaf800 a2=0 a3=7ffc2eaaf7ec items=0 ppid=3270 pid=3368 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:41.824000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Apr 20 19:08:42.005000 audit[3370]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3370 subj=system_u:system_r:kernel_t:s0 comm="iptables" Apr 20 19:08:42.005000 audit[3370]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff104ed200 a2=0 a3=7fff104ed1ec items=0 ppid=3270 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:42.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Apr 20 19:08:42.902000 audit[3376]: NETFILTER_CFG table=filter:77 family=2 entries=8 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:08:42.902000 audit[3376]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcc9c44ee0 a2=0 a3=7ffcc9c44ecc items=0 ppid=3270 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:42.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:08:42.978000 audit[3376]: NETFILTER_CFG table=nat:78 family=2 entries=14 op=nft_register_chain pid=3376 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Apr 20 19:08:42.978000 audit[3376]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffcc9c44ee0 a2=0 a3=7ffcc9c44ecc items=0 ppid=3270 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:42.978000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:08:43.040000 audit[3385]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_chain pid=3385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:43.040000 audit[3385]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeb411eda0 a2=0 a3=7ffeb411ed8c items=0 ppid=3270 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:43.040000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Apr 20 19:08:43.067000 audit[3387]: NETFILTER_CFG table=filter:80 family=10 entries=2 op=nft_register_chain pid=3387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:43.067000 audit[3387]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe10779dc0 a2=0 a3=7ffe10779dac items=0 ppid=3270 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:43.067000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Apr 20 19:08:43.314000 audit[3390]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:43.314000 audit[3390]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdac395ab0 a2=0 a3=7ffdac395a9c items=0 ppid=3270 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:43.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Apr 20 19:08:43.375000 audit[3391]: NETFILTER_CFG table=filter:82 family=10 entries=1 op=nft_register_chain pid=3391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:43.375000 audit[3391]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9226eac0 a2=0 a3=7ffd9226eaac items=0 ppid=3270 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:43.375000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Apr 20 19:08:43.816000 audit[3393]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:43.816000 audit[3393]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=528 a0=3 a1=7ffd7d829420 a2=0 a3=7ffd7d82940c items=0 ppid=3270 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:43.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Apr 20 19:08:43.982000 audit[3394]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.002777 kernel: kauditd_printk_skb: 86 callbacks suppressed Apr 20 19:08:44.002830 kernel: audit: type=1325 audit(1776712123.982:475): table=filter:84 family=10 entries=1 op=nft_register_chain pid=3394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:43.982000 audit[3394]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee7be2de0 a2=0 a3=7ffee7be2dcc items=0 ppid=3270 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:44.025697 kernel: audit: type=1300 audit(1776712123.982:475): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee7be2de0 a2=0 a3=7ffee7be2dcc items=0 ppid=3270 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:43.982000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Apr 20 19:08:44.028942 kernel: audit: type=1327 audit(1776712123.982:475): 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Apr 20 19:08:44.249000 audit[3396]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.264856 kernel: audit: type=1325 audit(1776712124.249:476): table=filter:85 family=10 entries=1 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.249000 audit[3396]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe26c4d1a0 a2=0 a3=7ffe26c4d18c items=0 ppid=3270 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:44.249000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Apr 20 19:08:44.323625 kernel: audit: type=1300 audit(1776712124.249:476): arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe26c4d1a0 a2=0 a3=7ffe26c4d18c items=0 ppid=3270 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:44.323833 kernel: audit: type=1327 audit(1776712124.249:476): proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Apr 20 19:08:44.396000 audit[3399]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=3399 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.396000 audit[3399]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe4cc08df0 a2=0 a3=7ffe4cc08ddc items=0 ppid=3270 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:44.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Apr 20 19:08:44.437107 kernel: audit: type=1325 audit(1776712124.396:477): table=filter:86 family=10 entries=2 op=nft_register_chain pid=3399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.437316 kernel: audit: type=1300 audit(1776712124.396:477): arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe4cc08df0 a2=0 a3=7ffe4cc08ddc items=0 ppid=3270 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:44.437399 kernel: audit: type=1327 audit(1776712124.396:477): proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Apr 20 19:08:44.509000 audit[3400]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.509000 audit[3400]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca204db60 a2=0 a3=7ffca204db4c items=0 ppid=3270 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:44.509000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Apr 20 19:08:44.517701 kernel: audit: type=1325 audit(1776712124.509:478): table=filter:87 family=10 entries=1 op=nft_register_chain pid=3400 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.644000 audit[3402]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.644000 audit[3402]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc79cf4090 a2=0 a3=7ffc79cf407c items=0 ppid=3270 pid=3402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:44.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Apr 20 19:08:44.676000 audit[3403]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.676000 audit[3403]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd065ad320 a2=0 a3=7ffd065ad30c items=0 ppid=3270 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:44.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Apr 20 
19:08:44.692000 audit[3405]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.692000 audit[3405]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda9fd26a0 a2=0 a3=7ffda9fd268c items=0 ppid=3270 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:44.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Apr 20 19:08:44.840000 audit[3408]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_rule pid=3408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:44.840000 audit[3408]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe60fac5d0 a2=0 a3=7ffe60fac5bc items=0 ppid=3270 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:44.840000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Apr 20 19:08:45.036000 audit[3417]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3417 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:45.036000 audit[3417]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff2611b2b0 a2=0 a3=7fff2611b29c items=0 ppid=3270 pid=3417 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:45.036000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Apr 20 19:08:45.104000 audit[3418]: NETFILTER_CFG table=nat:93 family=10 entries=1 op=nft_register_chain pid=3418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:45.104000 audit[3418]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff462b31c0 a2=0 a3=7fff462b31ac items=0 ppid=3270 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:45.104000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Apr 20 19:08:45.192800 kubelet[3163]: I0420 19:08:45.188070 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj2d9\" (UniqueName: \"kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9\") pod \"tigera-operator-6bf85f8dd-hvgdj\" (UID: \"22f1ff03-de8a-48db-b03e-54fdbe0d3d5f\") " pod="tigera-operator/tigera-operator-6bf85f8dd-hvgdj" Apr 20 19:08:45.200758 kubelet[3163]: I0420 19:08:45.195350 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-hvgdj\" (UID: \"22f1ff03-de8a-48db-b03e-54fdbe0d3d5f\") " pod="tigera-operator/tigera-operator-6bf85f8dd-hvgdj" Apr 20 
19:08:45.212000 audit[3421]: NETFILTER_CFG table=nat:94 family=10 entries=1 op=nft_register_rule pid=3421 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:45.212000 audit[3421]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd46e90790 a2=0 a3=7ffd46e9077c items=0 ppid=3270 pid=3421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:45.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Apr 20 19:08:45.342177 systemd[1]: Created slice kubepods-besteffort-pod22f1ff03_de8a_48db_b03e_54fdbe0d3d5f.slice - libcontainer container kubepods-besteffort-pod22f1ff03_de8a_48db_b03e_54fdbe0d3d5f.slice. Apr 20 19:08:45.479000 audit[3424]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_rule pid=3424 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:45.479000 audit[3424]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd97e71ed0 a2=0 a3=7ffd97e71ebc items=0 ppid=3270 pid=3424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:45.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Apr 20 19:08:45.536000 audit[3426]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_chain pid=3426 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:45.536000 audit[3426]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffc60bad750 a2=0 a3=7ffc60bad73c items=0 ppid=3270 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:45.536000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Apr 20 19:08:45.609000 audit[3428]: NETFILTER_CFG table=nat:97 family=10 entries=2 op=nft_register_chain pid=3428 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:45.609000 audit[3428]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe65e924b0 a2=0 a3=7ffe65e9249c items=0 ppid=3270 pid=3428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:45.609000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Apr 20 19:08:45.641000 audit[3429]: NETFILTER_CFG table=filter:98 family=10 entries=1 op=nft_register_chain pid=3429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:45.641000 audit[3429]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed4621e70 a2=0 a3=7ffed4621e5c items=0 ppid=3270 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:45.641000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Apr 20 19:08:45.724148 containerd[1659]: 
time="2026-04-20T19:08:45.724032207Z" level=info msg="RunPodSandbox for name:\"tigera-operator-6bf85f8dd-hvgdj\" uid:\"22f1ff03-de8a-48db-b03e-54fdbe0d3d5f\" namespace:\"tigera-operator\"" Apr 20 19:08:45.834000 audit[3431]: NETFILTER_CFG table=filter:99 family=10 entries=1 op=nft_register_rule pid=3431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:45.834000 audit[3431]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe43686040 a2=0 a3=7ffe4368602c items=0 ppid=3270 pid=3431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:45.834000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Apr 20 19:08:46.080000 audit[3435]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_rule pid=3435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Apr 20 19:08:46.080000 audit[3435]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffddbeb8d30 a2=0 a3=7ffddbeb8d1c items=0 ppid=3270 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:46.080000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Apr 20 19:08:46.183000 audit[3441]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=3441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Apr 20 19:08:46.183000 audit[3441]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe654c2bb0 a2=0 a3=7ffe654c2b9c items=0 ppid=3270 pid=3441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:46.183000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:08:46.216000 audit[3441]: NETFILTER_CFG table=nat:102 family=10 entries=7 op=nft_register_chain pid=3441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Apr 20 19:08:46.216000 audit[3441]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe654c2bb0 a2=0 a3=7ffe654c2b9c items=0 ppid=3270 pid=3441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:46.216000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:08:46.357977 containerd[1659]: time="2026-04-20T19:08:46.357686149Z" level=info msg="connecting to shim 535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04" address="unix:///run/containerd/s/f33a8176a49c61303773e14b6c829cb189085279a0319d87c3cb99135d7dee34" namespace=k8s.io protocol=ttrpc version=3 Apr 20 19:08:47.200641 systemd[1]: Started cri-containerd-535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04.scope - libcontainer container 535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04. 
Apr 20 19:08:47.757000 audit: BPF prog-id=123 op=LOAD Apr 20 19:08:47.777000 audit: BPF prog-id=124 op=LOAD Apr 20 19:08:47.777000 audit[3457]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186240 a2=98 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:47.777000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356362663331373337306532656530656335653634646536373662 Apr 20 19:08:47.778000 audit: BPF prog-id=124 op=UNLOAD Apr 20 19:08:47.778000 audit[3457]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:47.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356362663331373337306532656530656335653634646536373662 Apr 20 19:08:47.778000 audit: BPF prog-id=125 op=LOAD Apr 20 19:08:47.778000 audit[3457]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186490 a2=98 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:47.778000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356362663331373337306532656530656335653634646536373662 Apr 20 19:08:47.778000 audit: BPF prog-id=126 op=LOAD Apr 20 19:08:47.778000 audit[3457]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186220 a2=98 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:47.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356362663331373337306532656530656335653634646536373662 Apr 20 19:08:47.778000 audit: BPF prog-id=126 op=UNLOAD Apr 20 19:08:47.778000 audit[3457]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:47.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356362663331373337306532656530656335653634646536373662 Apr 20 19:08:47.778000 audit: BPF prog-id=125 op=UNLOAD Apr 20 19:08:47.778000 audit[3457]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 
19:08:47.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356362663331373337306532656530656335653634646536373662 Apr 20 19:08:47.778000 audit: BPF prog-id=127 op=LOAD Apr 20 19:08:47.778000 audit[3457]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866f0 a2=98 a3=0 items=0 ppid=3446 pid=3457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:08:47.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533356362663331373337306532656530656335653634646536373662 Apr 20 19:08:48.346295 containerd[1659]: time="2026-04-20T19:08:48.344268659Z" level=info msg="RunPodSandbox for name:\"tigera-operator-6bf85f8dd-hvgdj\" uid:\"22f1ff03-de8a-48db-b03e-54fdbe0d3d5f\" namespace:\"tigera-operator\" returns sandbox id \"535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04\"" Apr 20 19:08:48.413726 containerd[1659]: time="2026-04-20T19:08:48.413490180Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 20 19:08:51.538070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount440985470.mount: Deactivated successfully. 
Apr 20 19:08:59.226580 containerd[1659]: time="2026-04-20T19:08:59.226244818Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:08:59.232592 containerd[1659]: time="2026-04-20T19:08:59.226926120Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40837413" Apr 20 19:08:59.232592 containerd[1659]: time="2026-04-20T19:08:59.231262802Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:08:59.362567 containerd[1659]: time="2026-04-20T19:08:59.362353923Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:08:59.371006 containerd[1659]: time="2026-04-20T19:08:59.370768318Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 10.956116035s" Apr 20 19:08:59.371006 containerd[1659]: time="2026-04-20T19:08:59.370985645Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 20 19:08:59.423414 containerd[1659]: time="2026-04-20T19:08:59.423343066Z" level=info msg="CreateContainer within sandbox \"535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04\" for container name:\"tigera-operator\"" Apr 20 19:08:59.600117 containerd[1659]: time="2026-04-20T19:08:59.599761513Z" level=info msg="Container 
4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:08:59.645455 containerd[1659]: time="2026-04-20T19:08:59.645380875Z" level=info msg="CreateContainer within sandbox \"535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04\" for name:\"tigera-operator\" returns container id \"4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b\"" Apr 20 19:08:59.647718 containerd[1659]: time="2026-04-20T19:08:59.646748988Z" level=info msg="StartContainer for \"4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b\"" Apr 20 19:08:59.683150 containerd[1659]: time="2026-04-20T19:08:59.682937382Z" level=info msg="connecting to shim 4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b" address="unix:///run/containerd/s/f33a8176a49c61303773e14b6c829cb189085279a0319d87c3cb99135d7dee34" protocol=ttrpc version=3 Apr 20 19:09:00.003033 systemd[1]: Started cri-containerd-4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b.scope - libcontainer container 4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b. 
Apr 20 19:09:00.226000 audit: BPF prog-id=128 op=LOAD Apr 20 19:09:00.229835 kernel: kauditd_printk_skb: 69 callbacks suppressed Apr 20 19:09:00.230057 kernel: audit: type=1334 audit(1776712140.226:502): prog-id=128 op=LOAD Apr 20 19:09:00.233000 audit: BPF prog-id=129 op=LOAD Apr 20 19:09:00.233000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0240 a2=98 a3=0 items=0 ppid=3446 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:09:00.243811 kernel: audit: type=1334 audit(1776712140.233:503): prog-id=129 op=LOAD Apr 20 19:09:00.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633566336365653365343334356466346431663663643632356432 Apr 20 19:09:00.249465 kernel: audit: type=1300 audit(1776712140.233:503): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0240 a2=98 a3=0 items=0 ppid=3446 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:09:00.250625 kernel: audit: type=1327 audit(1776712140.233:503): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633566336365653365343334356466346431663663643632356432 Apr 20 19:09:00.233000 audit: BPF prog-id=129 op=UNLOAD Apr 20 19:09:00.233000 audit[3493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:09:00.262354 kernel: audit: type=1334 audit(1776712140.233:504): prog-id=129 op=UNLOAD Apr 20 19:09:00.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633566336365653365343334356466346431663663643632356432 Apr 20 19:09:00.266365 kernel: audit: type=1300 audit(1776712140.233:504): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:09:00.266435 kernel: audit: type=1327 audit(1776712140.233:504): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633566336365653365343334356466346431663663643632356432 Apr 20 19:09:00.233000 audit: BPF prog-id=130 op=LOAD Apr 20 19:09:00.275141 kernel: audit: type=1334 audit(1776712140.233:505): prog-id=130 op=LOAD Apr 20 19:09:00.233000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0490 a2=98 a3=0 items=0 ppid=3446 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:09:00.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633566336365653365343334356466346431663663643632356432 Apr 20 19:09:00.290249 kernel: audit: type=1300 
audit(1776712140.233:505): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0490 a2=98 a3=0 items=0 ppid=3446 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:09:00.290614 kernel: audit: type=1327 audit(1776712140.233:505): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633566336365653365343334356466346431663663643632356432 Apr 20 19:09:00.233000 audit: BPF prog-id=131 op=LOAD Apr 20 19:09:00.233000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0220 a2=98 a3=0 items=0 ppid=3446 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:09:00.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633566336365653365343334356466346431663663643632356432 Apr 20 19:09:00.233000 audit: BPF prog-id=131 op=UNLOAD Apr 20 19:09:00.233000 audit[3493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:09:00.233000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633566336365653365343334356466346431663663643632356432 Apr 20 19:09:00.233000 audit: BPF prog-id=130 op=UNLOAD Apr 20 19:09:00.233000 audit[3493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:09:00.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633566336365653365343334356466346431663663643632356432 Apr 20 19:09:00.233000 audit: BPF prog-id=132 op=LOAD Apr 20 19:09:00.233000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06f0 a2=98 a3=0 items=0 ppid=3446 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:09:00.233000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633566336365653365343334356466346431663663643632356432 Apr 20 19:09:00.380467 containerd[1659]: time="2026-04-20T19:09:00.380357719Z" level=info msg="StartContainer for \"4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b\" returns successfully" Apr 20 19:09:34.873700 kubelet[3163]: E0420 19:09:34.872447 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:09:41.863984 kubelet[3163]: E0420 19:09:41.863890 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:09:46.924352 kubelet[3163]: E0420 19:09:46.922296 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:09:49.959635 kubelet[3163]: E0420 19:09:49.959361 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:10:23.437071 kubelet[3163]: E0420 19:10:23.436753 3163 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Apr 20 19:10:27.783654 kubelet[3163]: E0420 19:10:27.781735 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:10:32.018379 sudo[1900]: pam_unix(sudo:session): session closed for user root Apr 20 19:10:32.018000 audit[1900]: AUDIT1106 pid=1900 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Apr 20 19:10:32.023395 kernel: kauditd_printk_skb: 12 callbacks suppressed Apr 20 19:10:32.023515 kernel: audit: type=1106 audit(1776712232.018:510): pid=1900 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Apr 20 19:10:32.018000 audit[1900]: AUDIT1104 pid=1900 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Apr 20 19:10:32.040714 kernel: audit: type=1104 audit(1776712232.018:511): pid=1900 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Apr 20 19:10:32.041322 sshd[1899]: Connection closed by 10.0.0.1 port 41716 Apr 20 19:10:32.046598 sshd-session[1883]: pam_unix(sshd:session): session closed for user core Apr 20 19:10:32.050000 audit[1883]: AUDIT1106 pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:10:32.070770 kernel: audit: type=1106 audit(1776712232.050:512): pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:10:32.050000 audit[1883]: AUDIT1104 pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:10:32.130932 kernel: audit: type=1104 audit(1776712232.050:513): pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:10:32.132572 systemd[1]: sshd@6-12290-10.0.0.14:22-10.0.0.1:41716.service: Deactivated successfully. Apr 20 19:10:32.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-12290-10.0.0.14:22-10.0.0.1:41716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:10:32.148629 kernel: audit: type=1131 audit(1776712232.139:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-12290-10.0.0.14:22-10.0.0.1:41716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:10:32.234204 systemd[1]: session-8.scope: Deactivated successfully. Apr 20 19:10:32.234930 systemd[1]: session-8.scope: Consumed 1min 46.400s CPU time, 205.7M memory peak. Apr 20 19:10:32.350869 systemd-logind[1627]: Session 8 logged out. Waiting for processes to exit. Apr 20 19:10:32.415759 systemd-logind[1627]: Removed session 8. 
Apr 20 19:10:32.871803 kubelet[3163]: E0420 19:10:32.871406 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:10:37.948610 kubelet[3163]: E0420 19:10:37.948032 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:10:43.035248 kubelet[3163]: E0420 19:10:43.034919 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:10:48.244390 kubelet[3163]: E0420 19:10:48.234504 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:10:53.578048 kubelet[3163]: E0420 19:10:53.577275 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:10:55.074307 kubelet[3163]: E0420 19:10:55.073614 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:10:58.659251 kubelet[3163]: E0420 19:10:58.658973 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:01.913282 kubelet[3163]: E0420 19:11:01.913119 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:11:03.739744 kubelet[3163]: 
E0420 19:11:03.739496 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:05.247041 kubelet[3163]: E0420 19:11:05.184324 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:11:07.056979 kubelet[3163]: E0420 19:11:07.054450 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:11:07.922462 kubelet[3163]: E0420 19:11:07.921817 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.055s" Apr 20 19:11:08.769992 kubelet[3163]: E0420 19:11:08.769694 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:13.953726 kubelet[3163]: E0420 19:11:13.950691 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:18.031915 kubelet[3163]: E0420 19:11:18.016234 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.122s" Apr 20 19:11:19.214148 kubelet[3163]: E0420 19:11:19.085066 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:24.433901 kubelet[3163]: E0420 19:11:24.426394 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" Apr 20 19:11:26.083458 kubelet[3163]: E0420 19:11:26.082344 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.222s" Apr 20 19:11:29.666274 kubelet[3163]: E0420 19:11:29.639496 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:34.702686 kubelet[3163]: E0420 19:11:34.700210 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:39.821821 kubelet[3163]: E0420 19:11:39.819675 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:44.847450 kubelet[3163]: E0420 19:11:44.845420 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:49.889462 kubelet[3163]: E0420 19:11:49.889205 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:52.020880 kubelet[3163]: E0420 19:11:52.019114 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.156s" Apr 20 19:11:54.507030 kubelet[3163]: E0420 19:11:54.503523 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.523s" Apr 20 19:11:55.053773 kubelet[3163]: E0420 19:11:55.053484 3163 kubelet.go:3117] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:11:56.414476 systemd[1]: Started sshd@7-8194-10.0.0.14:22-10.0.0.1:38878.service - OpenSSH per-connection server daemon (10.0.0.1:38878). Apr 20 19:11:56.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-8194-10.0.0.14:22-10.0.0.1:38878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:11:56.459095 kernel: audit: type=1130 audit(1776712316.423:515): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-8194-10.0.0.14:22-10.0.0.1:38878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:11:58.919249 kubelet[3163]: E0420 19:11:58.918713 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.926s" Apr 20 19:11:59.946000 audit[3620]: AUDIT1101 pid=3620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:11:59.999273 kernel: audit: type=1101 audit(1776712319.946:516): pid=3620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:00.027908 sshd[3620]: Accepted publickey for core from 10.0.0.1 port 38878 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:12:00.041000 audit[3620]: AUDIT1103 pid=3620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:00.043229 sshd-session[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:12:00.061771 kernel: audit: type=1103 audit(1776712320.041:517): pid=3620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:00.041000 audit[3620]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd79e6cc0 a2=3 a3=0 items=0 ppid=1 pid=3620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:12:00.041000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:12:00.135666 kernel: audit: type=1006 audit(1776712320.041:518): pid=3620 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Apr 20 19:12:00.138413 kernel: audit: type=1300 audit(1776712320.041:518): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd79e6cc0 a2=3 a3=0 items=0 ppid=1 pid=3620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:12:00.144506 kernel: audit: type=1327 audit(1776712320.041:518): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:12:00.214802 kubelet[3163]: E0420 19:12:00.211461 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:00.711056 systemd-logind[1627]: New 
session '9' of user 'core' with class 'user' and type 'tty'. Apr 20 19:12:00.903202 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 20 19:12:01.165000 audit[3620]: AUDIT1105 pid=3620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:01.185303 kernel: audit: type=1105 audit(1776712321.165:519): pid=3620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:01.424000 audit[3626]: AUDIT1103 pid=3626 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:01.464179 kernel: audit: type=1103 audit(1776712321.424:520): pid=3626 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:02.153498 kubelet[3163]: E0420 19:12:02.152192 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:12:05.559515 kubelet[3163]: E0420 19:12:05.471187 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:10.671032 
kubelet[3163]: E0420 19:12:10.669753 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:13.023098 kubelet[3163]: E0420 19:12:13.022049 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:12:15.144683 sshd[3626]: Connection closed by 10.0.0.1 port 38878 Apr 20 19:12:15.165445 sshd-session[3620]: pam_unix(sshd:session): session closed for user core Apr 20 19:12:15.202000 audit[3620]: AUDIT1106 pid=3620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:15.216683 systemd[1]: sshd@7-8194-10.0.0.14:22-10.0.0.1:38878.service: Deactivated successfully. Apr 20 19:12:15.203000 audit[3620]: AUDIT1104 pid=3620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:15.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-8194-10.0.0.14:22-10.0.0.1:38878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:12:15.310654 kernel: audit: type=1106 audit(1776712335.202:521): pid=3620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:15.218694 systemd[1]: sshd@7-8194-10.0.0.14:22-10.0.0.1:38878.service: Consumed 1.199s CPU time, 4.8M memory peak. Apr 20 19:12:15.319036 kernel: audit: type=1104 audit(1776712335.203:522): pid=3620 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:15.319394 kernel: audit: type=1131 audit(1776712335.218:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-8194-10.0.0.14:22-10.0.0.1:38878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:12:15.324892 systemd[1]: session-9.scope: Deactivated successfully. Apr 20 19:12:15.333899 systemd[1]: session-9.scope: Consumed 8.416s CPU time, 19.1M memory peak. Apr 20 19:12:15.360794 systemd-logind[1627]: Session 9 logged out. Waiting for processes to exit. Apr 20 19:12:15.508944 systemd-logind[1627]: Removed session 9. 
Apr 20 19:12:15.685761 kubelet[3163]: E0420 19:12:15.682372 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:19.925310 kubelet[3163]: E0420 19:12:19.921240 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:12:20.808990 kubelet[3163]: E0420 19:12:20.808596 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:21.120271 systemd[1]: Started sshd@8-8195-10.0.0.14:22-10.0.0.1:44848.service - OpenSSH per-connection server daemon (10.0.0.1:44848). Apr 20 19:12:21.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-8195-10.0.0.14:22-10.0.0.1:44848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:12:21.185410 kernel: audit: type=1130 audit(1776712341.133:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-8195-10.0.0.14:22-10.0.0.1:44848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:12:21.560519 containerd[1659]: time="2026-04-20T19:12:21.558372444Z" level=info msg="container event discarded" container=4541a931ad8dcecb05315d64587cd8ba7190629062d02fc0133cc1309c2941e5 type=CONTAINER_CREATED_EVENT Apr 20 19:12:21.560519 containerd[1659]: time="2026-04-20T19:12:21.559351626Z" level=info msg="container event discarded" container=4541a931ad8dcecb05315d64587cd8ba7190629062d02fc0133cc1309c2941e5 type=CONTAINER_STARTED_EVENT Apr 20 19:12:21.747284 containerd[1659]: time="2026-04-20T19:12:21.657983357Z" level=info msg="container event discarded" container=b20307ecdf6e20705e0fdd182059b7830a4642f157174536595c44ea5ac2f131 type=CONTAINER_CREATED_EVENT Apr 20 19:12:21.775994 containerd[1659]: time="2026-04-20T19:12:21.771313506Z" level=info msg="container event discarded" container=b20307ecdf6e20705e0fdd182059b7830a4642f157174536595c44ea5ac2f131 type=CONTAINER_STARTED_EVENT Apr 20 19:12:21.838388 containerd[1659]: time="2026-04-20T19:12:21.776183295Z" level=info msg="container event discarded" container=a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2 type=CONTAINER_CREATED_EVENT Apr 20 19:12:21.838388 containerd[1659]: time="2026-04-20T19:12:21.776307487Z" level=info msg="container event discarded" container=a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2 type=CONTAINER_STARTED_EVENT Apr 20 19:12:22.177064 containerd[1659]: time="2026-04-20T19:12:22.172316011Z" level=info msg="container event discarded" container=d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf type=CONTAINER_CREATED_EVENT Apr 20 19:12:22.205599 containerd[1659]: time="2026-04-20T19:12:22.180972899Z" level=info msg="container event discarded" container=ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729 type=CONTAINER_CREATED_EVENT Apr 20 19:12:22.221664 containerd[1659]: time="2026-04-20T19:12:22.217253534Z" level=info msg="container event discarded" 
container=336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65 type=CONTAINER_CREATED_EVENT Apr 20 19:12:23.949088 containerd[1659]: time="2026-04-20T19:12:23.942972969Z" level=info msg="container event discarded" container=ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729 type=CONTAINER_STARTED_EVENT Apr 20 19:12:24.159741 containerd[1659]: time="2026-04-20T19:12:24.129445967Z" level=info msg="container event discarded" container=d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf type=CONTAINER_STARTED_EVENT Apr 20 19:12:24.436933 containerd[1659]: time="2026-04-20T19:12:24.363871370Z" level=info msg="container event discarded" container=336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65 type=CONTAINER_STARTED_EVENT Apr 20 19:12:24.480439 kubelet[3163]: E0420 19:12:24.402988 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.531s" Apr 20 19:12:25.837000 audit[3655]: AUDIT1101 pid=3655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:25.855207 kernel: audit: type=1101 audit(1776712345.837:525): pid=3655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:25.862619 sshd[3655]: Accepted publickey for core from 10.0.0.1 port 44848 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:12:25.931000 audit[3655]: AUDIT1103 pid=3655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:25.949235 kernel: audit: type=1103 audit(1776712345.931:526): pid=3655 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:25.961725 kernel: audit: type=1006 audit(1776712345.944:527): pid=3655 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Apr 20 19:12:25.962670 sshd-session[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:12:25.944000 audit[3655]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff8f8a6a0 a2=3 a3=0 items=0 ppid=1 pid=3655 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:12:25.980384 kernel: audit: type=1300 audit(1776712345.944:527): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffff8f8a6a0 a2=3 a3=0 items=0 ppid=1 pid=3655 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:12:25.980713 kubelet[3163]: E0420 19:12:25.980093 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:25.944000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:12:26.013336 kernel: audit: type=1327 audit(1776712345.944:527): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:12:26.562935 systemd-logind[1627]: New session '10' of user 'core' with class 'user' and 
type 'tty'. Apr 20 19:12:26.775851 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 20 19:12:27.231000 audit[3655]: AUDIT1105 pid=3655 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:27.263427 kernel: audit: type=1105 audit(1776712347.231:528): pid=3655 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:27.449000 audit[3661]: AUDIT1103 pid=3661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:27.515295 kernel: audit: type=1103 audit(1776712347.449:529): pid=3661 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:29.923233 kubelet[3163]: E0420 19:12:29.922222 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:12:31.031092 kubelet[3163]: E0420 19:12:31.027876 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:36.218119 sshd[3661]: Connection closed by 10.0.0.1 
port 44848 Apr 20 19:12:36.293775 kubelet[3163]: E0420 19:12:36.283667 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:36.293114 sshd-session[3655]: pam_unix(sshd:session): session closed for user core Apr 20 19:12:36.371000 audit[3655]: AUDIT1106 pid=3655 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:36.422726 kernel: audit: type=1106 audit(1776712356.371:530): pid=3655 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:36.447000 audit[3655]: AUDIT1104 pid=3655 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:36.546947 kernel: audit: type=1104 audit(1776712356.447:531): pid=3655 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:36.862855 systemd[1]: sshd@8-8195-10.0.0.14:22-10.0.0.1:44848.service: Deactivated successfully. 
Apr 20 19:12:36.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-8195-10.0.0.14:22-10.0.0.1:44848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:12:37.015221 kernel: audit: type=1131 audit(1776712356.976:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-8195-10.0.0.14:22-10.0.0.1:44848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:12:37.009609 systemd[1]: sshd@8-8195-10.0.0.14:22-10.0.0.1:44848.service: Consumed 1.519s CPU time, 4.3M memory peak. Apr 20 19:12:37.194411 systemd[1]: session-10.scope: Deactivated successfully. Apr 20 19:12:37.224875 systemd[1]: session-10.scope: Consumed 5.562s CPU time, 15.7M memory peak. Apr 20 19:12:37.354904 systemd-logind[1627]: Session 10 logged out. Waiting for processes to exit. Apr 20 19:12:37.521257 systemd-logind[1627]: Removed session 10. Apr 20 19:12:40.019382 kubelet[3163]: E0420 19:12:40.019186 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.158s" Apr 20 19:12:41.579399 kubelet[3163]: E0420 19:12:41.564449 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:43.316443 systemd[1]: Started sshd@9-3-10.0.0.14:22-10.0.0.1:55080.service - OpenSSH per-connection server daemon (10.0.0.1:55080). Apr 20 19:12:43.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-3-10.0.0.14:22-10.0.0.1:55080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:12:43.506436 kernel: audit: type=1130 audit(1776712363.323:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-3-10.0.0.14:22-10.0.0.1:55080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:12:45.868939 kubelet[3163]: E0420 19:12:45.866228 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.003s" Apr 20 19:12:46.128000 audit[3679]: AUDIT1101 pid=3679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:46.216876 sshd[3679]: Accepted publickey for core from 10.0.0.1 port 55080 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:12:46.216000 audit[3679]: AUDIT1103 pid=3679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:46.240631 kernel: audit: type=1101 audit(1776712366.128:534): pid=3679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:46.218469 sshd-session[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:12:46.216000 audit[3679]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4606e290 a2=3 a3=0 items=0 ppid=1 pid=3679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:12:46.216000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:12:46.342392 kernel: audit: type=1103 audit(1776712366.216:535): pid=3679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:46.350447 kernel: audit: type=1006 audit(1776712366.216:536): pid=3679 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Apr 20 19:12:46.359127 kernel: audit: type=1300 audit(1776712366.216:536): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4606e290 a2=3 a3=0 items=0 ppid=1 pid=3679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:12:46.374741 kernel: audit: type=1327 audit(1776712366.216:536): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:12:46.784084 systemd-logind[1627]: New session '11' of user 'core' with class 'user' and type 'tty'. Apr 20 19:12:46.921267 kubelet[3163]: E0420 19:12:46.821957 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:47.205741 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 20 19:12:47.851000 audit[3679]: AUDIT1105 pid=3679 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:47.998924 kernel: audit: type=1105 audit(1776712367.851:537): pid=3679 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:48.147000 audit[3683]: AUDIT1103 pid=3683 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:48.211986 kernel: audit: type=1103 audit(1776712368.147:538): pid=3683 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:12:48.445305 kubelet[3163]: E0420 19:12:48.372798 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.483s" Apr 20 19:12:50.750465 kubelet[3163]: E0420 19:12:50.749966 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.305s" Apr 20 19:12:52.073484 kubelet[3163]: E0420 19:12:52.068215 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:52.150148 
kubelet[3163]: E0420 19:12:52.075806 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.287s" Apr 20 19:12:55.911180 kubelet[3163]: E0420 19:12:55.900908 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.011s" Apr 20 19:12:57.425939 kubelet[3163]: E0420 19:12:57.423192 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:12:58.538001 kubelet[3163]: E0420 19:12:58.532285 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.63s" Apr 20 19:13:02.602075 kubelet[3163]: E0420 19:13:02.601241 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:13:02.976364 sshd[3683]: Connection closed by 10.0.0.1 port 55080 Apr 20 19:13:03.044478 sshd-session[3679]: pam_unix(sshd:session): session closed for user core Apr 20 19:13:03.115000 audit[3679]: AUDIT1106 pid=3679 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:03.135298 kernel: audit: type=1106 audit(1776712383.115:539): pid=3679 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:03.116000 
audit[3679]: AUDIT1104 pid=3679 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:03.177713 kernel: audit: type=1104 audit(1776712383.116:540): pid=3679 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:03.370878 systemd[1]: sshd@9-3-10.0.0.14:22-10.0.0.1:55080.service: Deactivated successfully. Apr 20 19:13:03.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-3-10.0.0.14:22-10.0.0.1:55080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:13:03.495444 systemd[1]: sshd@9-3-10.0.0.14:22-10.0.0.1:55080.service: Consumed 1.023s CPU time, 4.1M memory peak. Apr 20 19:13:03.679067 kernel: audit: type=1131 audit(1776712383.484:541): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-3-10.0.0.14:22-10.0.0.1:55080 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:13:03.878804 systemd[1]: session-11.scope: Deactivated successfully. Apr 20 19:13:03.942300 systemd[1]: session-11.scope: Consumed 8.564s CPU time, 15M memory peak. Apr 20 19:13:04.196731 systemd-logind[1627]: Session 11 logged out. Waiting for processes to exit. Apr 20 19:13:04.442234 systemd-logind[1627]: Removed session 11. 
Apr 20 19:13:07.623681 kubelet[3163]: E0420 19:13:07.623387 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:13:07.898358 kubelet[3163]: E0420 19:13:07.873733 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:13:09.248169 systemd[1]: Started sshd@10-8196-10.0.0.14:22-10.0.0.1:41608.service - OpenSSH per-connection server daemon (10.0.0.1:41608). Apr 20 19:13:09.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-8196-10.0.0.14:22-10.0.0.1:41608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:13:09.306461 kernel: audit: type=1130 audit(1776712389.279:542): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-8196-10.0.0.14:22-10.0.0.1:41608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:13:12.319000 audit[3706]: NETFILTER_CFG table=filter:103 family=2 entries=14 op=nft_register_rule pid=3706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:12.319000 audit[3706]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffddc928540 a2=0 a3=7ffddc92852c items=0 ppid=3270 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:12.347282 kernel: audit: type=1325 audit(1776712392.319:543): table=filter:103 family=2 entries=14 op=nft_register_rule pid=3706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:12.347420 kernel: audit: type=1300 audit(1776712392.319:543): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffddc928540 a2=0 a3=7ffddc92852c items=0 ppid=3270 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:12.319000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:12.345000 audit[3700]: AUDIT1101 pid=3700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:12.431858 kernel: audit: type=1327 audit(1776712392.319:543): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:12.434730 kernel: audit: type=1101 audit(1776712392.345:544): pid=3700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:12.445000 audit[3706]: NETFILTER_CFG table=nat:104 family=2 entries=12 op=nft_register_rule pid=3706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:12.445000 audit[3706]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffddc928540 a2=0 a3=0 items=0 ppid=3270 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:12.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:12.491865 sshd[3700]: Accepted publickey for core from 10.0.0.1 port 41608 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:13:12.493000 audit[3700]: AUDIT1103 pid=3700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:12.505526 kernel: audit: type=1325 audit(1776712392.445:545): table=nat:104 family=2 entries=12 op=nft_register_rule pid=3706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:12.505714 kernel: audit: type=1300 audit(1776712392.445:545): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffddc928540 a2=0 a3=0 items=0 ppid=3270 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:12.505765 kernel: audit: type=1327 audit(1776712392.445:545): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:12.505785 kernel: audit: type=1103 audit(1776712392.493:546): pid=3700 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:12.532352 kernel: audit: type=1006 audit(1776712392.512:547): pid=3700 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Apr 20 19:13:12.512000 audit[3700]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0b8addc0 a2=3 a3=0 items=0 ppid=1 pid=3700 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:12.512000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:13:12.666622 sshd-session[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:13:12.714145 kubelet[3163]: E0420 19:13:12.712464 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:13:13.836038 systemd-logind[1627]: New session '12' of user 'core' with class 'user' and type 'tty'. Apr 20 19:13:14.146629 kubelet[3163]: E0420 19:13:14.025992 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.103s" Apr 20 19:13:14.242837 systemd[1]: Started session-12.scope - Session 12 of User core. 
Apr 20 19:13:14.873000 audit[3700]: AUDIT1105 pid=3700 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:14.908210 kernel: kauditd_printk_skb: 2 callbacks suppressed Apr 20 19:13:14.916499 kernel: audit: type=1105 audit(1776712394.873:548): pid=3700 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:15.045000 audit[3710]: AUDIT1103 pid=3710 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:15.216503 kernel: audit: type=1103 audit(1776712395.045:549): pid=3710 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:15.921795 kubelet[3163]: E0420 19:13:15.921479 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.05s" Apr 20 19:13:17.231000 audit[3709]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:17.231000 audit[3709]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff6f7f1b90 a2=0 a3=7fff6f7f1b7c items=0 ppid=3270 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:17.231000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:17.857791 kernel: audit: type=1325 audit(1776712397.231:550): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:17.878250 kernel: audit: type=1300 audit(1776712397.231:550): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff6f7f1b90 a2=0 a3=7fff6f7f1b7c items=0 ppid=3270 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:17.945519 kernel: audit: type=1327 audit(1776712397.231:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:18.382862 kubelet[3163]: E0420 19:13:18.381062 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:13:18.718000 audit[3709]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:18.718000 audit[3709]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6f7f1b90 a2=0 a3=0 items=0 ppid=3270 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:18.718000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:19.169084 kernel: audit: type=1325 audit(1776712398.718:551): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:19.185194 kernel: audit: type=1300 audit(1776712398.718:551): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6f7f1b90 a2=0 a3=0 items=0 ppid=3270 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:19.218992 kernel: audit: type=1327 audit(1776712398.718:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:22.213872 kubelet[3163]: E0420 19:13:22.212336 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.329s" Apr 20 19:13:27.447232 kubelet[3163]: E0420 19:13:27.443237 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:13:27.950000 audit[3721]: NETFILTER_CFG table=filter:107 family=2 entries=17 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:27.950000 audit[3721]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff6afd2a20 a2=0 a3=7fff6afd2a0c items=0 ppid=3270 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:27.950000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:28.126015 kernel: audit: type=1325 audit(1776712407.950:552): table=filter:107 family=2 entries=17 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:28.146387 kernel: audit: type=1300 audit(1776712407.950:552): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff6afd2a20 a2=0 a3=7fff6afd2a0c items=0 ppid=3270 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:28.341362 kernel: audit: type=1327 audit(1776712407.950:552): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:28.459000 audit[3721]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:28.459000 audit[3721]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6afd2a20 a2=0 a3=0 items=0 ppid=3270 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:28.459000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:28.738915 kernel: audit: type=1325 audit(1776712408.459:553): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:13:28.856226 kernel: audit: type=1300 audit(1776712408.459:553): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff6afd2a20 a2=0 a3=0 items=0 ppid=3270 pid=3721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:13:28.866487 kernel: audit: type=1327 audit(1776712408.459:553): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:13:29.096261 containerd[1659]: time="2026-04-20T19:13:29.064233273Z" level=info msg="container event discarded" container=2c13164970d6ef1f6141d749c815cb3471b4fde57d2ae638fe20a36a6b16d239 type=CONTAINER_CREATED_EVENT Apr 20 19:13:29.096261 containerd[1659]: time="2026-04-20T19:13:29.088002113Z" level=info msg="container event discarded" container=2c13164970d6ef1f6141d749c815cb3471b4fde57d2ae638fe20a36a6b16d239 type=CONTAINER_STARTED_EVENT Apr 20 19:13:30.246274 containerd[1659]: time="2026-04-20T19:13:30.218529165Z" level=info msg="container event discarded" container=3b0a4a3b835e85fc99f3ab2b047e3f77e9f11959e0cedf296580685144662c2a type=CONTAINER_CREATED_EVENT Apr 20 19:13:32.744146 containerd[1659]: time="2026-04-20T19:13:32.743527563Z" level=info msg="container event discarded" container=3b0a4a3b835e85fc99f3ab2b047e3f77e9f11959e0cedf296580685144662c2a type=CONTAINER_STARTED_EVENT Apr 20 19:13:33.440448 kubelet[3163]: E0420 19:13:33.432369 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:13:33.870915 kubelet[3163]: E0420 19:13:33.756086 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.285s" Apr 20 19:13:35.765715 systemd[1]: cri-containerd-4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b.scope: Deactivated successfully. 
Apr 20 19:13:35.810000 audit: BPF prog-id=132 op=UNLOAD Apr 20 19:13:35.842000 audit: BPF prog-id=128 op=UNLOAD Apr 20 19:13:36.655517 kernel: audit: type=1334 audit(1776712415.810:554): prog-id=132 op=UNLOAD Apr 20 19:13:35.844101 systemd[1]: cri-containerd-4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b.scope: Consumed 1min 51.592s CPU time, 92.2M memory peak. Apr 20 19:13:36.841508 containerd[1659]: time="2026-04-20T19:13:36.569313247Z" level=info msg="received container exit event container_id:\"4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b\" id:\"4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b\" pid:3506 exit_status:1 exited_at:{seconds:1776712416 nanos:261469101}" Apr 20 19:13:36.917807 kernel: audit: type=1334 audit(1776712415.842:555): prog-id=128 op=UNLOAD Apr 20 19:13:38.996821 kubelet[3163]: E0420 19:13:38.994206 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:13:39.165502 kubelet[3163]: E0420 19:13:39.004810 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.158s" Apr 20 19:13:40.848234 kubelet[3163]: E0420 19:13:40.847887 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.843s" Apr 20 19:13:41.248429 sshd[3710]: Connection closed by 10.0.0.1 port 41608 Apr 20 19:13:41.331717 sshd-session[3700]: pam_unix(sshd:session): session closed for user core Apr 20 19:13:41.674000 audit[3700]: AUDIT1106 pid=3700 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 
19:13:41.677000 audit[3700]: AUDIT1104 pid=3700 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:42.180685 kernel: audit: type=1106 audit(1776712421.674:556): pid=3700 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:42.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-8196-10.0.0.14:22-10.0.0.1:41608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:13:42.145042 systemd[1]: sshd@10-8196-10.0.0.14:22-10.0.0.1:41608.service: Deactivated successfully. Apr 20 19:13:43.061721 kernel: audit: type=1104 audit(1776712421.677:557): pid=3700 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:13:43.074465 kubelet[3163]: E0420 19:13:42.551972 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.694s" Apr 20 19:13:42.551950 systemd[1]: sshd@10-8196-10.0.0.14:22-10.0.0.1:41608.service: Consumed 1.482s CPU time, 4.6M memory peak. Apr 20 19:13:43.104046 kernel: audit: type=1131 audit(1776712422.419:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-8196-10.0.0.14:22-10.0.0.1:41608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:13:43.500622 systemd[1]: session-12.scope: Deactivated successfully. Apr 20 19:13:43.664524 kubelet[3163]: E0420 19:13:43.662363 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:13:43.707024 systemd[1]: session-12.scope: Consumed 13.576s CPU time, 17.6M memory peak. Apr 20 19:13:44.336393 systemd-logind[1627]: Session 12 logged out. Waiting for processes to exit. Apr 20 19:13:44.563864 kubelet[3163]: E0420 19:13:44.559060 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:13:45.049480 kubelet[3163]: E0420 19:13:44.751305 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:13:44.826455 systemd-logind[1627]: Removed session 12. 
Apr 20 19:13:45.820223 kubelet[3163]: E0420 19:13:45.769486 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:13:46.559340 kubelet[3163]: E0420 19:13:46.546747 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.27s" Apr 20 19:13:46.736363 containerd[1659]: time="2026-04-20T19:13:46.633467982Z" level=error msg="failed to delete task" error="context deadline exceeded" id=4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b Apr 20 19:13:46.736363 containerd[1659]: time="2026-04-20T19:13:46.726165770Z" level=error msg="failed to handle container TaskExit event container_id:\"4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b\" id:\"4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b\" pid:3506 exit_status:1 exited_at:{seconds:1776712416 nanos:261469101}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 19:13:47.570759 containerd[1659]: time="2026-04-20T19:13:47.554301881Z" level=error msg="ttrpc: received message on inactive stream" stream=73 Apr 20 19:13:48.257962 containerd[1659]: time="2026-04-20T19:13:48.257769960Z" level=info msg="TaskExit event container_id:\"4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b\" id:\"4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b\" pid:3506 exit_status:1 exited_at:{seconds:1776712416 nanos:261469101}" Apr 20 19:13:48.527448 containerd[1659]: time="2026-04-20T19:13:48.353257269Z" level=info msg="container event discarded" container=535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04 type=CONTAINER_CREATED_EVENT Apr 20 19:13:48.527448 containerd[1659]: time="2026-04-20T19:13:48.355418108Z" level=info msg="container event discarded" 
container=535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04 type=CONTAINER_STARTED_EVENT Apr 20 19:13:48.744388 containerd[1659]: time="2026-04-20T19:13:48.703037636Z" level=error msg="failed to delete task" error="rpc error: code = NotFound desc = container not created: not found" id=4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b Apr 20 19:13:49.059479 containerd[1659]: time="2026-04-20T19:13:49.054513088Z" level=info msg="Ensure that container 4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b in task-service has been cleanup successfully" Apr 20 19:13:49.835072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b-rootfs.mount: Deactivated successfully. Apr 20 19:13:51.750056 systemd[1]: Started sshd@11-8197-10.0.0.14:22-10.0.0.1:56388.service - OpenSSH per-connection server daemon (10.0.0.1:56388). Apr 20 19:13:51.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-8197-10.0.0.14:22-10.0.0.1:56388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:13:52.118649 kernel: audit: type=1130 audit(1776712431.924:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-8197-10.0.0.14:22-10.0.0.1:56388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:13:52.267348 kubelet[3163]: E0420 19:13:52.267029 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:13:56.917069 kubelet[3163]: E0420 19:13:56.916832 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.3s" Apr 20 19:13:58.714820 kubelet[3163]: E0420 19:13:58.712836 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:13:59.782185 containerd[1659]: time="2026-04-20T19:13:59.769867917Z" level=info msg="container event discarded" container=4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b type=CONTAINER_CREATED_EVENT Apr 20 19:14:00.471498 containerd[1659]: time="2026-04-20T19:14:00.382991346Z" level=info msg="container event discarded" container=4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b type=CONTAINER_STARTED_EVENT Apr 20 19:14:00.746516 kubelet[3163]: E0420 19:14:00.725496 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.718s" Apr 20 19:14:03.481680 kubelet[3163]: E0420 19:14:03.478376 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.752s" Apr 20 19:14:04.112300 kubelet[3163]: I0420 19:14:03.994831 3163 scope.go:117] "RemoveContainer" containerID="4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b" Apr 20 19:14:04.348417 kubelet[3163]: E0420 19:14:04.314658 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:08.179841 containerd[1659]: 
time="2026-04-20T19:14:08.179584093Z" level=info msg="CreateContainer within sandbox \"535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04\" for container name:\"tigera-operator\" attempt:1" Apr 20 19:14:09.081886 kubelet[3163]: E0420 19:14:09.081157 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.325s" Apr 20 19:14:09.851499 kubelet[3163]: E0420 19:14:09.838699 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:10.884221 containerd[1659]: time="2026-04-20T19:14:10.858390479Z" level=info msg="Container bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:14:11.467000 audit[3745]: AUDIT1101 pid=3745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:11.636696 kernel: audit: type=1101 audit(1776712451.467:560): pid=3745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:11.637319 kubelet[3163]: E0420 19:14:11.636489 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.355s" Apr 20 19:14:11.635000 audit[3745]: AUDIT1103 pid=3745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 
19:14:11.636000 audit[3745]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe32c2a0b0 a2=3 a3=0 items=0 ppid=1 pid=3745 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:11.636000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:14:11.816797 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 56388 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:14:11.640528 sshd-session[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:14:11.860200 kernel: audit: type=1103 audit(1776712451.635:561): pid=3745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:11.637359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3471957298.mount: Deactivated successfully. 
Apr 20 19:14:12.010341 kernel: audit: type=1006 audit(1776712451.636:562): pid=3745 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Apr 20 19:14:12.023138 kernel: audit: type=1300 audit(1776712451.636:562): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe32c2a0b0 a2=3 a3=0 items=0 ppid=1 pid=3745 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:12.059449 kernel: audit: type=1327 audit(1776712451.636:562): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:14:12.130425 containerd[1659]: time="2026-04-20T19:14:12.128808216Z" level=info msg="CreateContainer within sandbox \"535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04\" for name:\"tigera-operator\" attempt:1 returns container id \"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\"" Apr 20 19:14:12.217851 containerd[1659]: time="2026-04-20T19:14:12.173712308Z" level=info msg="StartContainer for \"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\"" Apr 20 19:14:12.674861 containerd[1659]: time="2026-04-20T19:14:12.673847515Z" level=info msg="connecting to shim bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" address="unix:///run/containerd/s/f33a8176a49c61303773e14b6c829cb189085279a0319d87c3cb99135d7dee34" protocol=ttrpc version=3 Apr 20 19:14:12.849250 systemd-logind[1627]: New session '13' of user 'core' with class 'user' and type 'tty'. Apr 20 19:14:14.116971 kubelet[3163]: E0420 19:14:14.115492 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.243s" Apr 20 19:14:15.373661 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 20 19:14:15.465019 kubelet[3163]: E0420 19:14:15.452303 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:15.518000 audit[3745]: AUDIT1105 pid=3745 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:15.543446 kernel: audit: type=1105 audit(1776712455.518:563): pid=3745 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:15.613000 audit[3762]: AUDIT1103 pid=3762 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:15.635816 kernel: audit: type=1103 audit(1776712455.613:564): pid=3762 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:15.669713 systemd[1]: Started cri-containerd-bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf.scope - libcontainer container bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf. 
Apr 20 19:14:16.562000 audit: BPF prog-id=133 op=LOAD Apr 20 19:14:16.602126 kernel: audit: type=1334 audit(1776712456.562:565): prog-id=133 op=LOAD Apr 20 19:14:16.614000 audit: BPF prog-id=134 op=LOAD Apr 20 19:14:16.614000 audit[3751]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000160240 a2=98 a3=0 items=0 ppid=3446 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:16.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261623962643161663035373361333565353763333063643132313232 Apr 20 19:14:16.614000 audit: BPF prog-id=134 op=UNLOAD Apr 20 19:14:16.647387 kernel: audit: type=1334 audit(1776712456.614:566): prog-id=134 op=LOAD Apr 20 19:14:16.647596 kernel: audit: type=1300 audit(1776712456.614:566): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000160240 a2=98 a3=0 items=0 ppid=3446 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:16.647618 kernel: audit: type=1327 audit(1776712456.614:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261623962643161663035373361333565353763333063643132313232 Apr 20 19:14:16.647635 kernel: audit: type=1334 audit(1776712456.614:567): prog-id=134 op=UNLOAD Apr 20 19:14:16.614000 audit[3751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:16.659672 kernel: audit: type=1300 audit(1776712456.614:567): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:16.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261623962643161663035373361333565353763333063643132313232 Apr 20 19:14:16.614000 audit: BPF prog-id=135 op=LOAD Apr 20 19:14:16.614000 audit[3751]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000160490 a2=98 a3=0 items=0 ppid=3446 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:16.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261623962643161663035373361333565353763333063643132313232 Apr 20 19:14:16.614000 audit: BPF prog-id=136 op=LOAD Apr 20 19:14:16.614000 audit[3751]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000160220 a2=98 a3=0 items=0 ppid=3446 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:16.614000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261623962643161663035373361333565353763333063643132313232 Apr 20 19:14:16.614000 audit: BPF prog-id=136 op=UNLOAD Apr 20 19:14:16.614000 audit[3751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:16.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261623962643161663035373361333565353763333063643132313232 Apr 20 19:14:16.614000 audit: BPF prog-id=135 op=UNLOAD Apr 20 19:14:16.614000 audit[3751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3446 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:16.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261623962643161663035373361333565353763333063643132313232 Apr 20 19:14:16.614000 audit: BPF prog-id=137 op=LOAD Apr 20 19:14:16.614000 audit[3751]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001606f0 a2=98 a3=0 items=0 ppid=3446 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 
19:14:16.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261623962643161663035373361333565353763333063643132313232 Apr 20 19:14:16.992738 kernel: audit: type=1327 audit(1776712456.614:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261623962643161663035373361333565353763333063643132313232 Apr 20 19:14:16.993203 kernel: audit: type=1334 audit(1776712456.614:568): prog-id=135 op=LOAD Apr 20 19:14:16.993394 kernel: audit: type=1300 audit(1776712456.614:568): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000160490 a2=98 a3=0 items=0 ppid=3446 pid=3751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:16.993490 kernel: audit: type=1327 audit(1776712456.614:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6261623962643161663035373361333565353763333063643132313232 Apr 20 19:14:18.032408 containerd[1659]: time="2026-04-20T19:14:18.028053223Z" level=info msg="StartContainer for \"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" returns successfully" Apr 20 19:14:21.528063 kubelet[3163]: E0420 19:14:21.527892 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.567s" Apr 20 19:14:21.557889 kubelet[3163]: E0420 19:14:21.470166 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:23.257975 kubelet[3163]: I0420 19:14:23.243477 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-hvgdj" podStartSLOduration=328.226500618 podStartE2EDuration="5m39.24345904s" podCreationTimestamp="2026-04-20 19:08:44 +0000 UTC" firstStartedPulling="2026-04-20 19:08:48.377895703 +0000 UTC m=+26.472145416" lastFinishedPulling="2026-04-20 19:08:59.394854133 +0000 UTC m=+37.489103838" observedRunningTime="2026-04-20 19:09:00.76034651 +0000 UTC m=+38.854596220" watchObservedRunningTime="2026-04-20 19:14:23.24345904 +0000 UTC m=+361.337708751" Apr 20 19:14:25.602489 kubelet[3163]: E0420 19:14:25.602157 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.732s" Apr 20 19:14:26.742244 kubelet[3163]: E0420 19:14:26.742103 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:29.983079 kubelet[3163]: E0420 19:14:29.979352 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.114s" Apr 20 19:14:30.422387 kubelet[3163]: E0420 19:14:30.366964 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:14:30.648943 sshd[3762]: Connection closed by 10.0.0.1 port 56388 Apr 20 19:14:30.734000 audit[3745]: AUDIT1106 pid=3745 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Apr 20 19:14:30.749000 audit[3745]: AUDIT1104 pid=3745 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:30.675329 sshd-session[3745]: pam_unix(sshd:session): session closed for user core Apr 20 19:14:30.855375 kernel: kauditd_printk_skb: 12 callbacks suppressed Apr 20 19:14:30.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-8197-10.0.0.14:22-10.0.0.1:56388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:30.850092 systemd[1]: sshd@11-8197-10.0.0.14:22-10.0.0.1:56388.service: Deactivated successfully. Apr 20 19:14:30.896021 kernel: audit: type=1106 audit(1776712470.734:573): pid=3745 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:30.875158 systemd[1]: sshd@11-8197-10.0.0.14:22-10.0.0.1:56388.service: Consumed 5.418s CPU time, 4.1M memory peak. Apr 20 19:14:30.896172 kernel: audit: type=1104 audit(1776712470.749:574): pid=3745 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:30.896186 kernel: audit: type=1131 audit(1776712470.871:575): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-8197-10.0.0.14:22-10.0.0.1:56388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:14:30.978257 systemd[1]: session-13.scope: Deactivated successfully. Apr 20 19:14:30.980709 systemd[1]: session-13.scope: Consumed 8.507s CPU time, 17.8M memory peak. Apr 20 19:14:31.019399 systemd-logind[1627]: Session 13 logged out. Waiting for processes to exit. Apr 20 19:14:31.169449 systemd-logind[1627]: Removed session 13. Apr 20 19:14:31.793096 kubelet[3163]: E0420 19:14:31.792915 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:35.869684 systemd[1]: Started sshd@12-4099-10.0.0.14:22-10.0.0.1:35832.service - OpenSSH per-connection server daemon (10.0.0.1:35832). Apr 20 19:14:35.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-4099-10.0.0.14:22-10.0.0.1:35832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:35.888944 kernel: audit: type=1130 audit(1776712475.871:576): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-4099-10.0.0.14:22-10.0.0.1:35832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:14:36.445000 audit[3803]: AUDIT1101 pid=3803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:36.453453 sshd[3803]: Accepted publickey for core from 10.0.0.1 port 35832 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:14:36.452000 audit[3803]: AUDIT1103 pid=3803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:36.457918 sshd-session[3803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:14:36.464830 kernel: audit: type=1101 audit(1776712476.445:577): pid=3803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:36.466525 kernel: audit: type=1103 audit(1776712476.452:578): pid=3803 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:36.466624 kernel: audit: type=1006 audit(1776712476.455:579): pid=3803 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Apr 20 19:14:36.455000 audit[3803]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecdd8fa60 a2=3 a3=0 items=0 ppid=1 pid=3803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:36.455000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:14:36.512598 kernel: audit: type=1300 audit(1776712476.455:579): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecdd8fa60 a2=3 a3=0 items=0 ppid=1 pid=3803 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:36.512893 kernel: audit: type=1327 audit(1776712476.455:579): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:14:36.553798 systemd-logind[1627]: New session '14' of user 'core' with class 'user' and type 'tty'. Apr 20 19:14:36.650714 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 20 19:14:36.713000 audit[3803]: AUDIT1105 pid=3803 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:36.760250 kernel: audit: type=1105 audit(1776712476.713:580): pid=3803 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:36.783000 audit[3807]: AUDIT1103 pid=3807 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:36.845931 kernel: audit: type=1103 audit(1776712476.783:581): pid=3807 
uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:36.861430 kubelet[3163]: E0420 19:14:36.861243 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:38.126912 sshd[3807]: Connection closed by 10.0.0.1 port 35832 Apr 20 19:14:38.133132 sshd-session[3803]: pam_unix(sshd:session): session closed for user core Apr 20 19:14:38.132000 audit[3803]: AUDIT1106 pid=3803 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:38.132000 audit[3803]: AUDIT1104 pid=3803 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:38.164226 kernel: audit: type=1106 audit(1776712478.132:582): pid=3803 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:38.164433 kernel: audit: type=1104 audit(1776712478.132:583): pid=3803 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 
19:14:38.174811 systemd[1]: sshd@12-4099-10.0.0.14:22-10.0.0.1:35832.service: Deactivated successfully. Apr 20 19:14:38.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-4099-10.0.0.14:22-10.0.0.1:35832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:38.182232 systemd[1]: session-14.scope: Deactivated successfully. Apr 20 19:14:38.183014 systemd[1]: session-14.scope: Consumed 1.052s CPU time, 17.7M memory peak. Apr 20 19:14:38.282731 systemd-logind[1627]: Session 14 logged out. Waiting for processes to exit. Apr 20 19:14:38.324390 systemd-logind[1627]: Removed session 14. Apr 20 19:14:41.903634 kubelet[3163]: E0420 19:14:41.899261 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:44.008888 systemd[1]: Started sshd@13-4100-10.0.0.14:22-10.0.0.1:35838.service - OpenSSH per-connection server daemon (10.0.0.1:35838). Apr 20 19:14:44.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-4100-10.0.0.14:22-10.0.0.1:35838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:44.036449 kernel: kauditd_printk_skb: 1 callbacks suppressed Apr 20 19:14:44.039939 kernel: audit: type=1130 audit(1776712484.010:585): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-4100-10.0.0.14:22-10.0.0.1:35838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:14:46.334000 audit[3825]: AUDIT1101 pid=3825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:46.411749 kernel: audit: type=1101 audit(1776712486.334:586): pid=3825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:46.412213 sshd[3825]: Accepted publickey for core from 10.0.0.1 port 35838 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:14:46.525000 audit[3825]: AUDIT1103 pid=3825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:46.525000 audit[3825]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9a0e4c70 a2=3 a3=0 items=0 ppid=1 pid=3825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:46.525000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:14:46.552112 sshd-session[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:14:46.619229 kernel: audit: type=1103 audit(1776712486.525:587): pid=3825 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 
19:14:46.619473 kernel: audit: type=1006 audit(1776712486.525:588): pid=3825 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Apr 20 19:14:46.619493 kernel: audit: type=1300 audit(1776712486.525:588): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9a0e4c70 a2=3 a3=0 items=0 ppid=1 pid=3825 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:46.619624 kernel: audit: type=1327 audit(1776712486.525:588): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:14:46.985010 kubelet[3163]: E0420 19:14:46.984819 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:47.010669 systemd-logind[1627]: New session '15' of user 'core' with class 'user' and type 'tty'. Apr 20 19:14:47.024976 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 20 19:14:47.075000 audit[3825]: AUDIT1105 pid=3825 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:47.172697 kernel: audit: type=1105 audit(1776712487.075:589): pid=3825 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:47.195000 audit[3829]: AUDIT1103 pid=3829 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:47.223004 kernel: audit: type=1103 audit(1776712487.195:590): pid=3829 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:49.038504 sshd[3829]: Connection closed by 10.0.0.1 port 35838 Apr 20 19:14:49.039751 sshd-session[3825]: pam_unix(sshd:session): session closed for user core Apr 20 19:14:49.038000 audit[3825]: AUDIT1106 pid=3825 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:49.043000 audit[3825]: AUDIT1104 pid=3825 uid=0 auid=500 ses=15 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:49.064405 kernel: audit: type=1106 audit(1776712489.038:591): pid=3825 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:49.138799 kernel: audit: type=1104 audit(1776712489.043:592): pid=3825 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:49.131922 systemd[1]: sshd@13-4100-10.0.0.14:22-10.0.0.1:35838.service: Deactivated successfully. Apr 20 19:14:49.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-4100-10.0.0.14:22-10.0.0.1:35838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:49.201381 kernel: audit: type=1131 audit(1776712489.144:593): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-4100-10.0.0.14:22-10.0.0.1:35838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:49.152230 systemd[1]: sshd@13-4100-10.0.0.14:22-10.0.0.1:35838.service: Consumed 1.051s CPU time, 4.2M memory peak. Apr 20 19:14:49.220307 systemd[1]: session-15.scope: Deactivated successfully. Apr 20 19:14:49.240642 systemd[1]: session-15.scope: Consumed 1.284s CPU time, 15.7M memory peak. Apr 20 19:14:49.324840 systemd-logind[1627]: Session 15 logged out. Waiting for processes to exit. 
Apr 20 19:14:49.336706 systemd-logind[1627]: Removed session 15. Apr 20 19:14:52.013135 kubelet[3163]: E0420 19:14:52.010865 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:55.323949 systemd[1]: Started sshd@14-4-10.0.0.14:22-10.0.0.1:48744.service - OpenSSH per-connection server daemon (10.0.0.1:48744). Apr 20 19:14:55.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-4-10.0.0.14:22-10.0.0.1:48744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:55.408701 kernel: audit: type=1130 audit(1776712495.399:594): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-4-10.0.0.14:22-10.0.0.1:48744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:55.462009 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories... Apr 20 19:14:56.035808 systemd-tmpfiles[3848]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 20 19:14:56.035845 systemd-tmpfiles[3848]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 20 19:14:56.036648 systemd-tmpfiles[3848]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 20 19:14:56.062423 systemd-tmpfiles[3848]: ACLs are not supported, ignoring. Apr 20 19:14:56.062496 systemd-tmpfiles[3848]: ACLs are not supported, ignoring. Apr 20 19:14:56.072612 systemd-tmpfiles[3848]: Detected autofs mount point /boot during canonicalization of boot. Apr 20 19:14:56.072630 systemd-tmpfiles[3848]: Skipping /boot Apr 20 19:14:56.094454 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. 
Apr 20 19:14:56.098215 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories. Apr 20 19:14:56.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:56.104756 kernel: audit: type=1130 audit(1776712496.097:595): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:56.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:14:56.107204 kernel: audit: type=1131 audit(1776712496.097:596): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:14:56.679000 audit[3847]: AUDIT1101 pid=3847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:56.690650 sshd[3847]: Accepted publickey for core from 10.0.0.1 port 48744 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:14:56.694035 kernel: audit: type=1101 audit(1776712496.679:597): pid=3847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:56.693000 audit[3847]: AUDIT1103 pid=3847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:56.696184 sshd-session[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:14:56.705754 kernel: audit: type=1103 audit(1776712496.693:598): pid=3847 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:56.693000 audit[3847]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd951c41b0 a2=3 a3=0 items=0 ppid=1 pid=3847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:56.716427 kernel: audit: type=1006 audit(1776712496.693:599): pid=3847 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Apr 20 19:14:56.693000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:14:56.718611 kernel: audit: type=1300 audit(1776712496.693:599): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd951c41b0 a2=3 a3=0 items=0 ppid=1 pid=3847 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:14:56.718813 kernel: audit: type=1327 audit(1776712496.693:599): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:14:56.845431 systemd-logind[1627]: New session '16' of user 'core' with class 'user' and type 'tty'. Apr 20 19:14:56.881839 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 20 19:14:56.921000 audit[3847]: AUDIT1105 pid=3847 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:56.935419 kernel: audit: type=1105 audit(1776712496.921:600): pid=3847 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:56.950000 audit[3854]: AUDIT1103 pid=3854 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:57.038819 kernel: audit: type=1103 
audit(1776712496.950:601): pid=3854 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:57.039412 kubelet[3163]: E0420 19:14:57.039346 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:14:59.553776 sshd[3854]: Connection closed by 10.0.0.1 port 48744 Apr 20 19:14:59.619245 sshd-session[3847]: pam_unix(sshd:session): session closed for user core Apr 20 19:14:59.622000 audit[3847]: AUDIT1106 pid=3847 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:59.627000 audit[3847]: AUDIT1104 pid=3847 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:14:59.749901 systemd[1]: sshd@14-4-10.0.0.14:22-10.0.0.1:48744.service: Deactivated successfully. Apr 20 19:14:59.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-4-10.0.0.14:22-10.0.0.1:48744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:14:59.872265 kubelet[3163]: E0420 19:14:59.870072 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:14:59.873368 systemd[1]: session-16.scope: Deactivated successfully. Apr 20 19:14:59.885654 systemd[1]: session-16.scope: Consumed 2.008s CPU time, 17.8M memory peak. Apr 20 19:15:00.017856 systemd-logind[1627]: Session 16 logged out. Waiting for processes to exit. Apr 20 19:15:00.194926 systemd-logind[1627]: Removed session 16. Apr 20 19:15:02.079255 kubelet[3163]: E0420 19:15:02.078756 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:15:03.005017 kubelet[3163]: E0420 19:15:03.004568 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:15:04.951583 systemd[1]: Started sshd@15-4101-10.0.0.14:22-10.0.0.1:41758.service - OpenSSH per-connection server daemon (10.0.0.1:41758). Apr 20 19:15:04.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-4101-10.0.0.14:22-10.0.0.1:41758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:05.009992 kernel: kauditd_printk_skb: 3 callbacks suppressed Apr 20 19:15:05.015418 kernel: audit: type=1130 audit(1776712504.950:605): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-4101-10.0.0.14:22-10.0.0.1:41758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:15:07.135189 kubelet[3163]: E0420 19:15:07.134839 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:15:07.407000 audit[3868]: AUDIT1101 pid=3868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:07.421283 sshd[3868]: Accepted publickey for core from 10.0.0.1 port 41758 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:15:07.519000 audit[3868]: AUDIT1103 pid=3868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:07.540271 kernel: audit: type=1101 audit(1776712507.407:606): pid=3868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:07.546850 kernel: audit: type=1103 audit(1776712507.519:607): pid=3868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:07.547679 kernel: audit: type=1006 audit(1776712507.538:608): pid=3868 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Apr 20 19:15:07.538000 audit[3868]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5e0d8b60 a2=3 a3=0 
items=0 ppid=1 pid=3868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:07.559904 kernel: audit: type=1300 audit(1776712507.538:608): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff5e0d8b60 a2=3 a3=0 items=0 ppid=1 pid=3868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:07.538000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:15:07.581481 kernel: audit: type=1327 audit(1776712507.538:608): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:15:07.564113 sshd-session[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:15:07.804110 systemd-logind[1627]: New session '17' of user 'core' with class 'user' and type 'tty'. Apr 20 19:15:07.912758 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 20 19:15:08.018000 audit[3868]: AUDIT1105 pid=3868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:08.053971 kernel: audit: type=1105 audit(1776712508.018:609): pid=3868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:08.108000 audit[3872]: AUDIT1103 pid=3872 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:08.122635 kernel: audit: type=1103 audit(1776712508.108:610): pid=3872 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:12.163697 sshd[3872]: Connection closed by 10.0.0.1 port 41758 Apr 20 19:15:12.186528 sshd-session[3868]: pam_unix(sshd:session): session closed for user core Apr 20 19:15:12.192000 audit[3868]: AUDIT1106 pid=3868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:12.202612 kernel: audit: type=1106 audit(1776712512.192:611): pid=3868 uid=0 auid=500 ses=17 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:12.206287 kubelet[3163]: E0420 19:15:12.202612 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:15:12.192000 audit[3868]: AUDIT1104 pid=3868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:12.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-4101-10.0.0.14:22-10.0.0.1:41758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:12.350730 kernel: audit: type=1104 audit(1776712512.192:612): pid=3868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:12.220703 systemd[1]: sshd@15-4101-10.0.0.14:22-10.0.0.1:41758.service: Deactivated successfully. Apr 20 19:15:12.403041 kernel: audit: type=1131 audit(1776712512.223:613): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-4101-10.0.0.14:22-10.0.0.1:41758 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:12.233803 systemd[1]: sshd@15-4101-10.0.0.14:22-10.0.0.1:41758.service: Consumed 1.204s CPU time, 4.1M memory peak. Apr 20 19:15:12.561237 systemd[1]: session-17.scope: Deactivated successfully. 
Apr 20 19:15:12.606278 systemd[1]: session-17.scope: Consumed 2.634s CPU time, 15M memory peak. Apr 20 19:15:12.617686 systemd-logind[1627]: Session 17 logged out. Waiting for processes to exit. Apr 20 19:15:12.773467 systemd-logind[1627]: Removed session 17. Apr 20 19:15:13.000429 kubelet[3163]: E0420 19:15:12.999598 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:15:15.436000 audit[3890]: NETFILTER_CFG table=filter:109 family=2 entries=19 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:15.436000 audit[3890]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf71b97d0 a2=0 a3=7ffcf71b97bc items=0 ppid=3270 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:15.436000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:15.481482 kernel: audit: type=1325 audit(1776712515.436:614): table=filter:109 family=2 entries=19 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:15.481642 kernel: audit: type=1300 audit(1776712515.436:614): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcf71b97d0 a2=0 a3=7ffcf71b97bc items=0 ppid=3270 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:15.484994 kernel: audit: type=1327 audit(1776712515.436:614): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:15.505000 
audit[3890]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:15.505000 audit[3890]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf71b97d0 a2=0 a3=0 items=0 ppid=3270 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:15.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:15.615074 kernel: audit: type=1325 audit(1776712515.505:615): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:15.616348 kernel: audit: type=1300 audit(1776712515.505:615): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf71b97d0 a2=0 a3=0 items=0 ppid=3270 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:15.616683 kernel: audit: type=1327 audit(1776712515.505:615): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:16.585000 audit[3892]: NETFILTER_CFG table=filter:111 family=2 entries=20 op=nft_register_rule pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:16.585000 audit[3892]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce9b41e90 a2=0 a3=7ffce9b41e7c items=0 ppid=3270 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:16.585000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:16.623000 audit[3892]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:16.623000 audit[3892]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce9b41e90 a2=0 a3=0 items=0 ppid=3270 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:16.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:16.632419 kernel: audit: type=1325 audit(1776712516.585:616): table=filter:111 family=2 entries=20 op=nft_register_rule pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:17.265798 kubelet[3163]: E0420 19:15:17.265011 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:15:17.847219 systemd[1]: Started sshd@16-12291-10.0.0.14:22-10.0.0.1:42704.service - OpenSSH per-connection server daemon (10.0.0.1:42704). Apr 20 19:15:17.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-12291-10.0.0.14:22-10.0.0.1:42704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:17.959498 kernel: kauditd_printk_skb: 5 callbacks suppressed Apr 20 19:15:17.961502 kernel: audit: type=1130 audit(1776712517.853:618): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-12291-10.0.0.14:22-10.0.0.1:42704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 20 19:15:19.283000 audit[3894]: AUDIT1101 pid=3894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:19.344000 audit[3894]: AUDIT1103 pid=3894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:19.354827 sshd[3894]: Accepted publickey for core from 10.0.0.1 port 42704 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:15:19.344000 audit[3894]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe50ccd080 a2=3 a3=0 items=0 ppid=1 pid=3894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:19.344000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:15:19.438133 kernel: audit: type=1101 audit(1776712519.283:619): pid=3894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:19.406316 sshd-session[3894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:15:19.441345 kernel: audit: type=1103 audit(1776712519.344:620): pid=3894 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 
20 19:15:19.445334 kernel: audit: type=1006 audit(1776712519.344:621): pid=3894 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Apr 20 19:15:19.445652 kernel: audit: type=1300 audit(1776712519.344:621): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe50ccd080 a2=3 a3=0 items=0 ppid=1 pid=3894 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:19.451484 kernel: audit: type=1327 audit(1776712519.344:621): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:15:19.618656 systemd-logind[1627]: New session '18' of user 'core' with class 'user' and type 'tty'. Apr 20 19:15:19.649777 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 20 19:15:19.886000 audit[3894]: AUDIT1105 pid=3894 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:19.918727 kernel: audit: type=1105 audit(1776712519.886:622): pid=3894 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:19.976000 audit[3898]: AUDIT1103 pid=3898 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:20.019820 kernel: audit: type=1103 
audit(1776712519.976:623): pid=3898 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:22.281129 sshd[3898]: Connection closed by 10.0.0.1 port 42704 Apr 20 19:15:22.286350 sshd-session[3894]: pam_unix(sshd:session): session closed for user core Apr 20 19:15:22.386000 audit[3894]: AUDIT1106 pid=3894 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:22.390805 kubelet[3163]: E0420 19:15:22.287207 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:15:22.399638 kernel: audit: type=1106 audit(1776712522.386:624): pid=3894 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:22.386000 audit[3894]: AUDIT1104 pid=3894 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:22.401347 kernel: audit: type=1104 audit(1776712522.386:625): pid=3894 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:22.417124 systemd[1]: sshd@16-12291-10.0.0.14:22-10.0.0.1:42704.service: Deactivated successfully. Apr 20 19:15:22.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-12291-10.0.0.14:22-10.0.0.1:42704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:22.611962 systemd[1]: session-18.scope: Deactivated successfully. Apr 20 19:15:22.626785 systemd[1]: session-18.scope: Consumed 1.715s CPU time, 16.8M memory peak. Apr 20 19:15:22.636106 systemd-logind[1627]: Session 18 logged out. Waiting for processes to exit. Apr 20 19:15:22.651380 systemd-logind[1627]: Removed session 18. Apr 20 19:15:27.373623 kubelet[3163]: E0420 19:15:27.371815 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:15:27.800434 systemd[1]: Started sshd@17-8198-10.0.0.14:22-10.0.0.1:48854.service - OpenSSH per-connection server daemon (10.0.0.1:48854). Apr 20 19:15:27.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-8198-10.0.0.14:22-10.0.0.1:48854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:27.813015 kernel: kauditd_printk_skb: 1 callbacks suppressed Apr 20 19:15:27.814016 kernel: audit: type=1130 audit(1776712527.799:627): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-8198-10.0.0.14:22-10.0.0.1:48854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:27.815514 systemd[1]: Started systemd-sysupdate.service - Automatic System Update. 
Apr 20 19:15:27.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysupdate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:27.821579 kernel: audit: type=1130 audit(1776712527.814:628): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysupdate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:28.013878 systemd-sysupdate[3915]: Discovering installed instances… Apr 20 19:15:28.014814 systemd-sysupdate[3915]: Discovering available instances… Apr 20 19:15:28.014893 systemd-sysupdate[3915]: Determining installed update sets… Apr 20 19:15:28.014923 systemd-sysupdate[3915]: Determining available update sets… Apr 20 19:15:28.014947 systemd-sysupdate[3915]: No update needed. Apr 20 19:15:28.023876 systemd[1]: systemd-sysupdate.service: Deactivated successfully. Apr 20 19:15:28.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysupdate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:28.043135 kernel: audit: type=1131 audit(1776712528.023:629): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysupdate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:15:28.236000 audit[3914]: AUDIT1101 pid=3914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:28.279077 kernel: audit: type=1101 audit(1776712528.236:630): pid=3914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:28.280749 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 48854 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:15:28.279000 audit[3914]: AUDIT1103 pid=3914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:28.336468 kernel: audit: type=1103 audit(1776712528.279:631): pid=3914 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:28.351742 kernel: audit: type=1006 audit(1776712528.335:632): pid=3914 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Apr 20 19:15:28.335000 audit[3914]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc09ed6f60 a2=3 a3=0 items=0 ppid=1 pid=3914 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:28.335000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:15:28.386012 kernel: audit: type=1300 audit(1776712528.335:632): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc09ed6f60 a2=3 a3=0 items=0 ppid=1 pid=3914 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:28.372160 sshd-session[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:15:28.392210 kernel: audit: type=1327 audit(1776712528.335:632): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:15:28.549990 systemd-logind[1627]: New session '19' of user 'core' with class 'user' and type 'tty'. Apr 20 19:15:28.581356 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 20 19:15:28.643000 audit[3914]: AUDIT1105 pid=3914 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:28.658819 kernel: audit: type=1105 audit(1776712528.643:633): pid=3914 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:28.659000 audit[3920]: AUDIT1103 pid=3920 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:28.747148 kernel: audit: type=1103 audit(1776712528.659:634): pid=3920 
uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:30.485666 sshd[3920]: Connection closed by 10.0.0.1 port 48854 Apr 20 19:15:30.487000 audit[3914]: AUDIT1106 pid=3914 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:30.488000 audit[3914]: AUDIT1104 pid=3914 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:30.487016 sshd-session[3914]: pam_unix(sshd:session): session closed for user core Apr 20 19:15:30.520797 systemd[1]: sshd@17-8198-10.0.0.14:22-10.0.0.1:48854.service: Deactivated successfully. Apr 20 19:15:30.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-8198-10.0.0.14:22-10.0.0.1:48854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:30.531923 systemd[1]: session-19.scope: Deactivated successfully. Apr 20 19:15:30.563365 systemd[1]: session-19.scope: Consumed 1.284s CPU time, 15.5M memory peak. Apr 20 19:15:30.609390 systemd-logind[1627]: Session 19 logged out. Waiting for processes to exit. Apr 20 19:15:30.615205 systemd-logind[1627]: Removed session 19. 
Apr 20 19:15:31.861086 kubelet[3163]: E0420 19:15:31.860937 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:15:32.436001 kubelet[3163]: E0420 19:15:32.432395 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:15:35.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-8199-10.0.0.14:22-10.0.0.1:54296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:35.738488 kernel: kauditd_printk_skb: 3 callbacks suppressed Apr 20 19:15:35.723847 systemd[1]: Started sshd@18-8199-10.0.0.14:22-10.0.0.1:54296.service - OpenSSH per-connection server daemon (10.0.0.1:54296). Apr 20 19:15:35.753130 kernel: audit: type=1130 audit(1776712535.722:638): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-8199-10.0.0.14:22-10.0.0.1:54296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:15:36.840000 audit[3936]: AUDIT1101 pid=3936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:36.845000 audit[3936]: AUDIT1103 pid=3936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:36.846000 audit[3936]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccc511e50 a2=3 a3=0 items=0 ppid=1 pid=3936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:36.846000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:15:36.957029 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 54296 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:15:36.848497 sshd-session[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:15:36.990873 kernel: audit: type=1101 audit(1776712536.840:639): pid=3936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:36.990995 kernel: audit: type=1103 audit(1776712536.845:640): pid=3936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 
19:15:36.991114 kernel: audit: type=1006 audit(1776712536.846:641): pid=3936 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Apr 20 19:15:36.991203 kernel: audit: type=1300 audit(1776712536.846:641): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffccc511e50 a2=3 a3=0 items=0 ppid=1 pid=3936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:36.991226 kernel: audit: type=1327 audit(1776712536.846:641): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:15:37.209769 systemd-logind[1627]: New session '20' of user 'core' with class 'user' and type 'tty'. Apr 20 19:15:37.336114 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 20 19:15:37.453175 kubelet[3163]: E0420 19:15:37.452474 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:15:37.534000 audit[3936]: AUDIT1105 pid=3936 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:37.553750 kernel: audit: type=1105 audit(1776712537.534:642): pid=3936 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:37.691000 audit[3941]: AUDIT1103 pid=3941 uid=0 auid=500 ses=20 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:37.730770 kernel: audit: type=1103 audit(1776712537.691:643): pid=3941 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:38.844000 audit[3954]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:38.844000 audit[3954]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe695ac3e0 a2=0 a3=7ffe695ac3cc items=0 ppid=3270 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:38.844000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:38.928000 audit[3954]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:38.928000 audit[3954]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe695ac3e0 a2=0 a3=0 items=0 ppid=3270 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:38.928000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:38.942913 kernel: audit: type=1325 audit(1776712538.844:644): table=filter:113 family=2 entries=21 
op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:38.943025 kernel: audit: type=1300 audit(1776712538.844:644): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe695ac3e0 a2=0 a3=7ffe695ac3cc items=0 ppid=3270 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:39.372000 audit[3957]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3957 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:39.372000 audit[3957]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe37419b10 a2=0 a3=7ffe37419afc items=0 ppid=3270 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:39.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:39.386000 audit[3957]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3957 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:39.386000 audit[3957]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe37419b10 a2=0 a3=0 items=0 ppid=3270 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:39.386000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:40.634713 sshd[3941]: Connection closed by 10.0.0.1 port 54296 Apr 20 19:15:40.648398 sshd-session[3936]: pam_unix(sshd:session): session 
closed for user core Apr 20 19:15:40.749000 audit[3936]: AUDIT1106 pid=3936 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:40.758000 audit[3936]: AUDIT1104 pid=3936 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:40.795381 kernel: kauditd_printk_skb: 10 callbacks suppressed Apr 20 19:15:40.795648 kernel: audit: type=1106 audit(1776712540.749:648): pid=3936 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:40.795714 kernel: audit: type=1104 audit(1776712540.758:649): pid=3936 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:40.807486 systemd[1]: sshd@18-8199-10.0.0.14:22-10.0.0.1:54296.service: Deactivated successfully. Apr 20 19:15:40.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-8199-10.0.0.14:22-10.0.0.1:54296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:15:40.830015 kernel: audit: type=1131 audit(1776712540.820:650): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-8199-10.0.0.14:22-10.0.0.1:54296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:40.904184 systemd[1]: session-20.scope: Deactivated successfully. Apr 20 19:15:40.928503 systemd[1]: session-20.scope: Consumed 1.822s CPU time, 15.3M memory peak. Apr 20 19:15:41.089159 systemd-logind[1627]: Session 20 logged out. Waiting for processes to exit. Apr 20 19:15:41.347378 systemd-logind[1627]: Removed session 20. Apr 20 19:15:41.591000 audit[3965]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:41.591000 audit[3965]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcadc6c040 a2=0 a3=7ffcadc6c02c items=0 ppid=3270 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:41.618514 kernel: audit: type=1325 audit(1776712541.591:651): table=filter:117 family=2 entries=22 op=nft_register_rule pid=3965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:41.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:41.632811 kernel: audit: type=1300 audit(1776712541.591:651): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcadc6c040 a2=0 a3=7ffcadc6c02c items=0 ppid=3270 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:41.633030 kernel: audit: type=1327 
audit(1776712541.591:651): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:41.631000 audit[3965]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:41.631000 audit[3965]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcadc6c040 a2=0 a3=0 items=0 ppid=3270 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:41.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:41.734731 kernel: audit: type=1325 audit(1776712541.631:652): table=nat:118 family=2 entries=12 op=nft_register_rule pid=3965 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:15:41.736362 kernel: audit: type=1300 audit(1776712541.631:652): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcadc6c040 a2=0 a3=0 items=0 ppid=3270 pid=3965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:41.737057 kernel: audit: type=1327 audit(1776712541.631:652): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:15:42.539947 kubelet[3163]: E0420 19:15:42.538926 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:15:42.862629 kubelet[3163]: I0420 19:15:42.858513 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-cni-log-dir\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.862629 kubelet[3163]: I0420 19:15:42.858777 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-xtables-lock\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.862629 kubelet[3163]: I0420 19:15:42.858795 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-cni-net-dir\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.862629 kubelet[3163]: I0420 19:15:42.858808 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.862629 kubelet[3163]: I0420 19:15:42.858923 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.880810 kubelet[3163]: I0420 19:15:42.858965 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-lib-modules\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.880810 kubelet[3163]: I0420 19:15:42.858978 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kv6b\" (UniqueName: \"kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.880810 kubelet[3163]: I0420 19:15:42.859049 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-nodeproc\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.880810 kubelet[3163]: I0420 19:15:42.859060 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-var-lib-calico\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.880810 kubelet[3163]: I0420 19:15:42.859079 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-policysync\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.942119 kubelet[3163]: I0420 19:15:42.859090 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-sys-fs\") pod 
\"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.942119 kubelet[3163]: I0420 19:15:42.859101 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-var-run-calico\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.942119 kubelet[3163]: I0420 19:15:42.859159 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-bpffs\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.942119 kubelet[3163]: I0420 19:15:42.859168 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-cni-bin-dir\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:42.942119 kubelet[3163]: I0420 19:15:42.859220 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/071d23f6-a94b-4165-9229-2d0570b516d8-flexvol-driver-host\") pod \"calico-node-g9fs5\" (UID: \"071d23f6-a94b-4165-9229-2d0570b516d8\") " pod="calico-system/calico-node-g9fs5" Apr 20 19:15:43.059931 systemd[1]: Created slice kubepods-besteffort-pod071d23f6_a94b_4165_9229_2d0570b516d8.slice - libcontainer container kubepods-besteffort-pod071d23f6_a94b_4165_9229_2d0570b516d8.slice. 
Apr 20 19:15:43.686101 kubelet[3163]: E0420 19:15:43.685930 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:43.686101 kubelet[3163]: W0420 19:15:43.686031 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:43.713285 kubelet[3163]: E0420 19:15:43.686204 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:43.732030 kubelet[3163]: E0420 19:15:43.721941 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:43.785705 kubelet[3163]: W0420 19:15:43.736795 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:43.785705 kubelet[3163]: E0420 19:15:43.736877 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:43.785705 kubelet[3163]: E0420 19:15:43.782589 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:43.785705 kubelet[3163]: W0420 19:15:43.782870 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:43.785705 kubelet[3163]: E0420 19:15:43.783182 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:43.860837 kubelet[3163]: E0420 19:15:43.858939 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:43.860837 kubelet[3163]: W0420 19:15:43.859130 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:43.860837 kubelet[3163]: E0420 19:15:43.859157 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:43.887668 kubelet[3163]: E0420 19:15:43.886062 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:43.889326 kubelet[3163]: W0420 19:15:43.888579 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:43.889326 kubelet[3163]: E0420 19:15:43.888660 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:43.902179 kubelet[3163]: E0420 19:15:43.901688 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:43.902179 kubelet[3163]: W0420 19:15:43.902090 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:43.902179 kubelet[3163]: E0420 19:15:43.902192 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:43.923617 kubelet[3163]: E0420 19:15:43.923015 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:43.923617 kubelet[3163]: W0420 19:15:43.923357 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:43.923617 kubelet[3163]: E0420 19:15:43.923395 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.064313 kubelet[3163]: E0420 19:15:43.985435 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.064313 kubelet[3163]: W0420 19:15:43.987094 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.064313 kubelet[3163]: E0420 19:15:43.987372 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.121102 kubelet[3163]: E0420 19:15:44.110151 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.121102 kubelet[3163]: W0420 19:15:44.118385 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.134758 kubelet[3163]: E0420 19:15:44.125970 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.134758 kubelet[3163]: E0420 19:15:44.133186 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.134758 kubelet[3163]: W0420 19:15:44.134150 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.134758 kubelet[3163]: E0420 19:15:44.134332 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.144716 kubelet[3163]: E0420 19:15:44.135430 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.144716 kubelet[3163]: W0420 19:15:44.135441 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.144716 kubelet[3163]: E0420 19:15:44.135454 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.144716 kubelet[3163]: E0420 19:15:44.136182 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.144716 kubelet[3163]: W0420 19:15:44.136192 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.144716 kubelet[3163]: E0420 19:15:44.136201 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.144716 kubelet[3163]: E0420 19:15:44.136663 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.144716 kubelet[3163]: W0420 19:15:44.136673 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.144716 kubelet[3163]: E0420 19:15:44.136682 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.144716 kubelet[3163]: E0420 19:15:44.142096 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.145116 kubelet[3163]: W0420 19:15:44.142118 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.145116 kubelet[3163]: E0420 19:15:44.142191 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.145361 containerd[1659]: time="2026-04-20T19:15:44.145247021Z" level=info msg="RunPodSandbox for name:\"calico-node-g9fs5\" uid:\"071d23f6-a94b-4165-9229-2d0570b516d8\" namespace:\"calico-system\""
Apr 20 19:15:44.274419 kubelet[3163]: E0420 19:15:44.151227 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.274419 kubelet[3163]: W0420 19:15:44.165331 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.274419 kubelet[3163]: E0420 19:15:44.233161 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.294032 kubelet[3163]: E0420 19:15:44.283090 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.294032 kubelet[3163]: W0420 19:15:44.289304 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.294032 kubelet[3163]: E0420 19:15:44.289661 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.294032 kubelet[3163]: E0420 19:15:44.293830 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.303360 kubelet[3163]: W0420 19:15:44.298468 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.303360 kubelet[3163]: E0420 19:15:44.299132 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.353030 kubelet[3163]: E0420 19:15:44.345453 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.353030 kubelet[3163]: W0420 19:15:44.352910 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.353030 kubelet[3163]: E0420 19:15:44.353026 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.379813 kubelet[3163]: E0420 19:15:44.379361 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.379813 kubelet[3163]: W0420 19:15:44.379399 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.379813 kubelet[3163]: E0420 19:15:44.379423 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.418305 kubelet[3163]: E0420 19:15:44.413507 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.418305 kubelet[3163]: W0420 19:15:44.416912 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.418305 kubelet[3163]: E0420 19:15:44.417183 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.535792 kubelet[3163]: E0420 19:15:44.535661 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.535792 kubelet[3163]: W0420 19:15:44.535896 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.535792 kubelet[3163]: E0420 19:15:44.535927 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.776673 kubelet[3163]: E0420 19:15:44.767826 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.798214 kubelet[3163]: W0420 19:15:44.797978 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.799638 kubelet[3163]: E0420 19:15:44.798395 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.863182 kubelet[3163]: E0420 19:15:44.862992 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.863182 kubelet[3163]: W0420 19:15:44.863068 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.863182 kubelet[3163]: E0420 19:15:44.863162 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:44.943326 kubelet[3163]: E0420 19:15:44.943038 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:44.943326 kubelet[3163]: W0420 19:15:44.943150 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:44.960845 kubelet[3163]: E0420 19:15:44.954327 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.003150 kubelet[3163]: E0420 19:15:45.002855 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20"
Apr 20 19:15:45.012444 kubelet[3163]: E0420 19:15:45.012298 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.087359 kubelet[3163]: W0420 19:15:45.031612 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.087359 kubelet[3163]: E0420 19:15:45.080747 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.123846 containerd[1659]: time="2026-04-20T19:15:45.018649880Z" level=info msg="connecting to shim 1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f" address="unix:///run/containerd/s/d6fd6f578359a16fb6047ac6b8915843558ecdd02f7ae288b74c76a061bb8a9a" namespace=k8s.io protocol=ttrpc version=3
Apr 20 19:15:45.184139 kubelet[3163]: E0420 19:15:45.183379 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.240500 kubelet[3163]: W0420 19:15:45.187379 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.282761 kubelet[3163]: E0420 19:15:45.282363 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.312681 kubelet[3163]: E0420 19:15:45.312615 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.313275 kubelet[3163]: W0420 19:15:45.313084 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.313498 kubelet[3163]: E0420 19:15:45.313481 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.322815 kubelet[3163]: E0420 19:15:45.322620 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.322815 kubelet[3163]: W0420 19:15:45.322670 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.322815 kubelet[3163]: E0420 19:15:45.322800 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.438984 kubelet[3163]: E0420 19:15:45.323825 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.438984 kubelet[3163]: W0420 19:15:45.323838 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.438984 kubelet[3163]: E0420 19:15:45.323853 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.438984 kubelet[3163]: E0420 19:15:45.324441 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.438984 kubelet[3163]: W0420 19:15:45.324453 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.438984 kubelet[3163]: E0420 19:15:45.324464 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.438984 kubelet[3163]: E0420 19:15:45.372799 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.438984 kubelet[3163]: W0420 19:15:45.378070 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.438984 kubelet[3163]: E0420 19:15:45.378409 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.438984 kubelet[3163]: E0420 19:15:45.397936 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.515523 kubelet[3163]: W0420 19:15:45.397962 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.515523 kubelet[3163]: E0420 19:15:45.398014 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.515523 kubelet[3163]: E0420 19:15:45.450220 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.515523 kubelet[3163]: W0420 19:15:45.485616 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.515523 kubelet[3163]: E0420 19:15:45.486487 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.528665 kubelet[3163]: E0420 19:15:45.526651 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.551733 kubelet[3163]: W0420 19:15:45.541051 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.661331 kubelet[3163]: E0420 19:15:45.566398 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.695311 kubelet[3163]: E0420 19:15:45.676354 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.698902 kubelet[3163]: W0420 19:15:45.695754 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.698902 kubelet[3163]: E0420 19:15:45.695966 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.702509 kubelet[3163]: E0420 19:15:45.700657 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.702509 kubelet[3163]: W0420 19:15:45.700752 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.702509 kubelet[3163]: E0420 19:15:45.700870 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.702509 kubelet[3163]: E0420 19:15:45.702503 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.702743 kubelet[3163]: W0420 19:15:45.702518 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.702743 kubelet[3163]: E0420 19:15:45.702572 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.702807 kubelet[3163]: E0420 19:15:45.702762 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.702807 kubelet[3163]: W0420 19:15:45.702768 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.702807 kubelet[3163]: E0420 19:15:45.702776 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.702916 kubelet[3163]: E0420 19:15:45.702901 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.703065 kubelet[3163]: W0420 19:15:45.703056 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.703107 kubelet[3163]: E0420 19:15:45.703101 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.703367 kubelet[3163]: E0420 19:15:45.703358 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.703410 kubelet[3163]: W0420 19:15:45.703405 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.703441 kubelet[3163]: E0420 19:15:45.703436 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.703680 kubelet[3163]: E0420 19:15:45.703672 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.716499 kubelet[3163]: W0420 19:15:45.705965 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.716499 kubelet[3163]: E0420 19:15:45.706193 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.723063 kubelet[3163]: E0420 19:15:45.721942 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.723063 kubelet[3163]: W0420 19:15:45.721982 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.723063 kubelet[3163]: E0420 19:15:45.722013 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.744869 kubelet[3163]: E0420 19:15:45.744287 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.778309 kubelet[3163]: W0420 19:15:45.749853 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.778309 kubelet[3163]: E0420 19:15:45.750192 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.898658 kubelet[3163]: E0420 19:15:45.898201 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.911341 kubelet[3163]: W0420 19:15:45.899716 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:45.932906 kubelet[3163]: E0420 19:15:45.912490 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:45.939766 kubelet[3163]: E0420 19:15:45.925077 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:45.939766 kubelet[3163]: W0420 19:15:45.935500 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:46.041358 kubelet[3163]: E0420 19:15:46.038353 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:46.052872 kubelet[3163]: E0420 19:15:46.052844 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:46.054001 kubelet[3163]: W0420 19:15:46.053758 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:46.058828 kubelet[3163]: E0420 19:15:46.057899 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:46.060211 kubelet[3163]: E0420 19:15:46.060193 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:46.060341 kubelet[3163]: W0420 19:15:46.060325 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:46.060481 kubelet[3163]: E0420 19:15:46.060469 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:46.069766 kubelet[3163]: E0420 19:15:46.066965 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:46.103014 kubelet[3163]: W0420 19:15:46.082366 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:46.103014 kubelet[3163]: E0420 19:15:46.101975 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:46.105453 kubelet[3163]: E0420 19:15:46.105427 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:46.105609 kubelet[3163]: W0420 19:15:46.105528 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:46.105884 kubelet[3163]: E0420 19:15:46.105869 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:46.106206 kubelet[3163]: E0420 19:15:46.106194 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:46.106331 kubelet[3163]: W0420 19:15:46.106320 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:46.106384 kubelet[3163]: E0420 19:15:46.106376 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:46.113176 kubelet[3163]: E0420 19:15:46.113001 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:46.113176 kubelet[3163]: W0420 19:15:46.114956 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:46.113176 kubelet[3163]: E0420 19:15:46.115211 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:46.219084 kubelet[3163]: E0420 19:15:46.136591 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:46.219084 kubelet[3163]: W0420 19:15:46.136627 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:46.219084 kubelet[3163]: E0420 19:15:46.136743 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:46.219084 kubelet[3163]: E0420 19:15:46.137484 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:46.219084 kubelet[3163]: W0420 19:15:46.137497 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:46.219084 kubelet[3163]: E0420 19:15:46.137511 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 20 19:15:46.241214 kubelet[3163]: E0420 19:15:46.241077 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.241214 kubelet[3163]: W0420 19:15:46.241234 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.254671 kubelet[3163]: E0420 19:15:46.253629 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:46.254740 systemd[1]: Started sshd@19-8200-10.0.0.14:22-10.0.0.1:40338.service - OpenSSH per-connection server daemon (10.0.0.1:40338). Apr 20 19:15:46.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-8200-10.0.0.14:22-10.0.0.1:40338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:15:46.264335 kernel: audit: type=1130 audit(1776712546.254:653): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-8200-10.0.0.14:22-10.0.0.1:40338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:15:46.358050 kubelet[3163]: E0420 19:15:46.349489 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.358050 kubelet[3163]: W0420 19:15:46.349815 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.358050 kubelet[3163]: E0420 19:15:46.350029 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:46.442046 kubelet[3163]: E0420 19:15:46.439178 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.443593 kubelet[3163]: W0420 19:15:46.442103 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.443678 kubelet[3163]: E0420 19:15:46.443632 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:46.525217 kubelet[3163]: E0420 19:15:46.523974 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.525217 kubelet[3163]: W0420 19:15:46.524099 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.525217 kubelet[3163]: E0420 19:15:46.524369 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:46.547657 kubelet[3163]: E0420 19:15:46.547339 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.547657 kubelet[3163]: W0420 19:15:46.547609 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.596850 kubelet[3163]: E0420 19:15:46.547715 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:46.605831 kubelet[3163]: E0420 19:15:46.597077 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.605831 kubelet[3163]: W0420 19:15:46.600065 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.610194 kubelet[3163]: E0420 19:15:46.610016 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:46.624988 kubelet[3163]: E0420 19:15:46.623109 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.633358 kubelet[3163]: W0420 19:15:46.628295 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.709775 kubelet[3163]: E0420 19:15:46.634098 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:46.709775 kubelet[3163]: E0420 19:15:46.634527 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.709775 kubelet[3163]: W0420 19:15:46.634577 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.709775 kubelet[3163]: E0420 19:15:46.634592 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:46.709775 kubelet[3163]: E0420 19:15:46.639468 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.709775 kubelet[3163]: W0420 19:15:46.639710 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.709775 kubelet[3163]: E0420 19:15:46.639854 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:46.732122 kubelet[3163]: E0420 19:15:46.719404 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.732122 kubelet[3163]: W0420 19:15:46.719732 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.732122 kubelet[3163]: E0420 19:15:46.719956 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:46.732122 kubelet[3163]: E0420 19:15:46.726841 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.732122 kubelet[3163]: W0420 19:15:46.727963 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.732122 kubelet[3163]: E0420 19:15:46.728240 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:46.933391 kubelet[3163]: E0420 19:15:46.808411 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.933391 kubelet[3163]: W0420 19:15:46.810234 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.933391 kubelet[3163]: E0420 19:15:46.811377 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:46.933391 kubelet[3163]: E0420 19:15:46.812294 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.933391 kubelet[3163]: W0420 19:15:46.812311 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.933391 kubelet[3163]: E0420 19:15:46.812325 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:46.933391 kubelet[3163]: E0420 19:15:46.877519 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:46.933391 kubelet[3163]: W0420 19:15:46.899162 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:46.933391 kubelet[3163]: E0420 19:15:46.899328 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:47.120391 kubelet[3163]: E0420 19:15:47.111182 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:47.120391 kubelet[3163]: W0420 19:15:47.112525 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:47.120391 kubelet[3163]: E0420 19:15:47.112715 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:47.306936 kubelet[3163]: E0420 19:15:47.306183 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:47.306936 kubelet[3163]: W0420 19:15:47.306216 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:47.306936 kubelet[3163]: E0420 19:15:47.306314 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:47.327902 kubelet[3163]: E0420 19:15:47.327795 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:47.364201 kubelet[3163]: W0420 19:15:47.330429 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:47.364201 kubelet[3163]: E0420 19:15:47.330640 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:47.482075 kubelet[3163]: E0420 19:15:47.459353 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:47.717100 kubelet[3163]: W0420 19:15:47.476403 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:47.717100 kubelet[3163]: E0420 19:15:47.496152 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:47.717100 kubelet[3163]: E0420 19:15:47.628360 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:47.717100 kubelet[3163]: W0420 19:15:47.628845 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:47.717100 kubelet[3163]: E0420 19:15:47.628944 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:47.717100 kubelet[3163]: E0420 19:15:47.631943 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:47.717100 kubelet[3163]: W0420 19:15:47.632042 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:47.717100 kubelet[3163]: E0420 19:15:47.632073 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:47.717100 kubelet[3163]: E0420 19:15:47.645848 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:47.717100 kubelet[3163]: W0420 19:15:47.646058 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:47.941693 kubelet[3163]: E0420 19:15:47.646130 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:47.941693 kubelet[3163]: E0420 19:15:47.704299 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:47.941693 kubelet[3163]: W0420 19:15:47.704335 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:47.941693 kubelet[3163]: E0420 19:15:47.704401 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:47.941693 kubelet[3163]: E0420 19:15:47.705000 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:47.941693 kubelet[3163]: W0420 19:15:47.705011 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:47.941693 kubelet[3163]: E0420 19:15:47.705024 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:47.941693 kubelet[3163]: E0420 19:15:47.705126 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:47.941693 kubelet[3163]: E0420 19:15:47.459987 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:15:48.121338 kubelet[3163]: W0420 19:15:47.705132 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.121338 kubelet[3163]: E0420 19:15:47.705196 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.121338 kubelet[3163]: E0420 19:15:47.711693 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.121338 kubelet[3163]: W0420 19:15:47.713352 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.121338 kubelet[3163]: E0420 19:15:47.713451 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.121338 kubelet[3163]: E0420 19:15:47.716975 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.121338 kubelet[3163]: W0420 19:15:47.717076 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.121338 kubelet[3163]: E0420 19:15:47.717165 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.121338 kubelet[3163]: E0420 19:15:47.717500 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:15:48.121338 kubelet[3163]: E0420 19:15:47.797014 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.042749 systemd[1]: Started cri-containerd-1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f.scope - libcontainer container 1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f. Apr 20 19:15:48.151376 kubelet[3163]: W0420 19:15:47.797206 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.151376 kubelet[3163]: E0420 19:15:47.797322 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.151376 kubelet[3163]: E0420 19:15:47.810336 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.151376 kubelet[3163]: W0420 19:15:47.811193 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.151376 kubelet[3163]: E0420 19:15:47.829369 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.151376 kubelet[3163]: E0420 19:15:48.011365 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.151376 kubelet[3163]: W0420 19:15:48.011481 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.151376 kubelet[3163]: E0420 19:15:48.011608 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.151376 kubelet[3163]: E0420 19:15:48.012662 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.151376 kubelet[3163]: W0420 19:15:48.012673 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.191857 kubelet[3163]: E0420 19:15:48.012684 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.191857 kubelet[3163]: E0420 19:15:48.014865 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.191857 kubelet[3163]: W0420 19:15:48.014964 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.191857 kubelet[3163]: E0420 19:15:48.014978 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.191857 kubelet[3163]: E0420 19:15:48.015598 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.191857 kubelet[3163]: W0420 19:15:48.015609 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.191857 kubelet[3163]: E0420 19:15:48.015619 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.191857 kubelet[3163]: E0420 19:15:48.042475 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.191857 kubelet[3163]: W0420 19:15:48.057027 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.191857 kubelet[3163]: E0420 19:15:48.057970 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.192182 kubelet[3163]: E0420 19:15:48.065398 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.192182 kubelet[3163]: W0420 19:15:48.065420 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.192182 kubelet[3163]: E0420 19:15:48.065484 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.192182 kubelet[3163]: E0420 19:15:48.120590 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.192182 kubelet[3163]: W0420 19:15:48.120711 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.192182 kubelet[3163]: E0420 19:15:48.120911 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.192182 kubelet[3163]: E0420 19:15:48.121377 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.192182 kubelet[3163]: W0420 19:15:48.121390 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.192182 kubelet[3163]: E0420 19:15:48.121403 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.192182 kubelet[3163]: E0420 19:15:48.121630 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.208501 kubelet[3163]: W0420 19:15:48.121639 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.208501 kubelet[3163]: E0420 19:15:48.121649 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.208501 kubelet[3163]: E0420 19:15:48.130653 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.208501 kubelet[3163]: W0420 19:15:48.130672 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.208501 kubelet[3163]: E0420 19:15:48.130693 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.208501 kubelet[3163]: E0420 19:15:48.131054 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.208501 kubelet[3163]: W0420 19:15:48.131061 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.208501 kubelet[3163]: E0420 19:15:48.131072 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.208501 kubelet[3163]: E0420 19:15:48.131208 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.208501 kubelet[3163]: W0420 19:15:48.131213 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.208890 kubelet[3163]: E0420 19:15:48.131218 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.208890 kubelet[3163]: E0420 19:15:48.134449 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.208890 kubelet[3163]: W0420 19:15:48.134511 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.208890 kubelet[3163]: E0420 19:15:48.134527 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.208890 kubelet[3163]: E0420 19:15:48.135072 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.208890 kubelet[3163]: W0420 19:15:48.135081 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.208890 kubelet[3163]: E0420 19:15:48.135089 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.208890 kubelet[3163]: E0420 19:15:48.135175 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.208890 kubelet[3163]: W0420 19:15:48.135179 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.208890 kubelet[3163]: E0420 19:15:48.135184 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.209294 kubelet[3163]: E0420 19:15:48.135486 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.209294 kubelet[3163]: W0420 19:15:48.135493 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.209294 kubelet[3163]: E0420 19:15:48.135499 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.209294 kubelet[3163]: E0420 19:15:48.135965 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.209294 kubelet[3163]: W0420 19:15:48.135973 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.209294 kubelet[3163]: E0420 19:15:48.135980 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.209294 kubelet[3163]: E0420 19:15:48.136072 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.209294 kubelet[3163]: W0420 19:15:48.136076 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.209294 kubelet[3163]: E0420 19:15:48.136081 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.209294 kubelet[3163]: E0420 19:15:48.136161 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.209843 kubelet[3163]: W0420 19:15:48.136165 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.209843 kubelet[3163]: E0420 19:15:48.136169 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.209843 kubelet[3163]: E0420 19:15:48.138011 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.209843 kubelet[3163]: W0420 19:15:48.138071 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.209843 kubelet[3163]: E0420 19:15:48.138091 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.209843 kubelet[3163]: E0420 19:15:48.138351 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.209843 kubelet[3163]: W0420 19:15:48.138357 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.209843 kubelet[3163]: E0420 19:15:48.138365 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.209843 kubelet[3163]: E0420 19:15:48.138504 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.209843 kubelet[3163]: W0420 19:15:48.138511 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.233489 kubelet[3163]: E0420 19:15:48.138516 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.233489 kubelet[3163]: E0420 19:15:48.138726 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.233489 kubelet[3163]: W0420 19:15:48.138736 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.233489 kubelet[3163]: E0420 19:15:48.138744 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.233489 kubelet[3163]: E0420 19:15:48.138856 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.233489 kubelet[3163]: W0420 19:15:48.138862 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.233489 kubelet[3163]: E0420 19:15:48.138869 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.233489 kubelet[3163]: E0420 19:15:48.139486 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.233489 kubelet[3163]: W0420 19:15:48.140865 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.233489 kubelet[3163]: E0420 19:15:48.140988 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.272071 kubelet[3163]: E0420 19:15:48.143223 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.272071 kubelet[3163]: W0420 19:15:48.146292 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.272071 kubelet[3163]: E0420 19:15:48.147398 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.272071 kubelet[3163]: E0420 19:15:48.149452 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.272071 kubelet[3163]: W0420 19:15:48.149527 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.272071 kubelet[3163]: E0420 19:15:48.149591 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.272071 kubelet[3163]: E0420 19:15:48.150103 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.272071 kubelet[3163]: W0420 19:15:48.150111 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.272071 kubelet[3163]: E0420 19:15:48.150119 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.272071 kubelet[3163]: E0420 19:15:48.150198 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.276722 kubelet[3163]: W0420 19:15:48.150202 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.276722 kubelet[3163]: E0420 19:15:48.150208 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.276722 kubelet[3163]: E0420 19:15:48.151182 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.276722 kubelet[3163]: W0420 19:15:48.151232 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.276722 kubelet[3163]: E0420 19:15:48.151309 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.276722 kubelet[3163]: E0420 19:15:48.151625 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.276722 kubelet[3163]: W0420 19:15:48.151632 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.276722 kubelet[3163]: E0420 19:15:48.151638 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.276722 kubelet[3163]: E0420 19:15:48.151879 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.276722 kubelet[3163]: W0420 19:15:48.151884 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.277331 kubelet[3163]: E0420 19:15:48.151890 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.277331 kubelet[3163]: E0420 19:15:48.151995 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.277331 kubelet[3163]: W0420 19:15:48.151999 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.277331 kubelet[3163]: E0420 19:15:48.152003 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.277331 kubelet[3163]: E0420 19:15:48.160357 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.277331 kubelet[3163]: W0420 19:15:48.160510 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.277331 kubelet[3163]: E0420 19:15:48.160628 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.277331 kubelet[3163]: E0420 19:15:48.190918 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.277331 kubelet[3163]: W0420 19:15:48.191103 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.277331 kubelet[3163]: E0420 19:15:48.191181 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.335067 kubelet[3163]: E0420 19:15:48.332134 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.457755 kubelet[3163]: W0420 19:15:48.379447 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.507141 kubelet[3163]: E0420 19:15:48.498365 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.529750 kubelet[3163]: I0420 19:15:48.507955 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9f02930c-961c-4c4b-8334-b61cbd5c3d20-varrun\") pod \"csi-node-driver-5h6vg\" (UID: \"9f02930c-961c-4c4b-8334-b61cbd5c3d20\") " pod="calico-system/csi-node-driver-5h6vg" Apr 20 19:15:48.529750 kubelet[3163]: E0420 19:15:48.526010 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.529750 kubelet[3163]: W0420 19:15:48.526165 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.529750 kubelet[3163]: E0420 19:15:48.526199 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.552525 kubelet[3163]: I0420 19:15:48.526466 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9f02930c-961c-4c4b-8334-b61cbd5c3d20-registration-dir\") pod \"csi-node-driver-5h6vg\" (UID: \"9f02930c-961c-4c4b-8334-b61cbd5c3d20\") " pod="calico-system/csi-node-driver-5h6vg" Apr 20 19:15:48.650303 kubelet[3163]: E0420 19:15:48.650003 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.656243 kubelet[3163]: W0420 19:15:48.654701 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.656243 kubelet[3163]: E0420 19:15:48.654748 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.662820 kubelet[3163]: E0420 19:15:48.661979 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.662820 kubelet[3163]: W0420 19:15:48.662008 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.662820 kubelet[3163]: E0420 19:15:48.662095 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.662820 kubelet[3163]: E0420 19:15:48.662514 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.676870 kubelet[3163]: W0420 19:15:48.662525 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.676870 kubelet[3163]: E0420 19:15:48.675159 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.696352 kubelet[3163]: E0420 19:15:48.696206 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.696352 kubelet[3163]: W0420 19:15:48.696883 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.696352 kubelet[3163]: E0420 19:15:48.696968 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.719000 audit[4085]: AUDIT1101 pid=4085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:48.748958 kubelet[3163]: I0420 19:15:48.703877 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m6bv\" (UniqueName: \"kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv\") pod \"csi-node-driver-5h6vg\" (UID: \"9f02930c-961c-4c4b-8334-b61cbd5c3d20\") " pod="calico-system/csi-node-driver-5h6vg" Apr 20 19:15:48.748958 kubelet[3163]: E0420 19:15:48.713692 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.748958 kubelet[3163]: W0420 19:15:48.713795 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.748958 kubelet[3163]: E0420 19:15:48.713877 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.748958 kubelet[3163]: E0420 19:15:48.744416 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.755000 audit[4085]: AUDIT1103 pid=4085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:48.844000 audit[4085]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe33774a40 a2=3 a3=0 items=0 ppid=1 pid=4085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:48.859628 kernel: audit: type=1101 audit(1776712548.719:654): pid=4085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:48.859749 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 40338 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:15:48.860237 kubelet[3163]: W0420 19:15:48.779782 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.860237 kubelet[3163]: E0420 19:15:48.782604 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.860237 kubelet[3163]: E0420 19:15:48.826340 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.860237 kubelet[3163]: W0420 19:15:48.826597 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.860237 kubelet[3163]: E0420 19:15:48.826631 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.860237 kubelet[3163]: E0420 19:15:48.826865 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.860237 kubelet[3163]: W0420 19:15:48.826874 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.860237 kubelet[3163]: E0420 19:15:48.826889 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.860237 kubelet[3163]: I0420 19:15:48.827055 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kubelet-dir\") pod \"csi-node-driver-5h6vg\" (UID: \"9f02930c-961c-4c4b-8334-b61cbd5c3d20\") " pod="calico-system/csi-node-driver-5h6vg" Apr 20 19:15:48.858766 sshd-session[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:15:48.860836 kernel: audit: type=1103 audit(1776712548.755:655): pid=4085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:15:48.860862 kubelet[3163]: E0420 19:15:48.835985 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.860862 kubelet[3163]: W0420 19:15:48.836072 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.860862 kubelet[3163]: E0420 19:15:48.836165 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.860862 kubelet[3163]: E0420 19:15:48.845015 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.860862 kubelet[3163]: W0420 19:15:48.845332 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.860862 kubelet[3163]: E0420 19:15:48.845424 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:48.860862 kubelet[3163]: E0420 19:15:48.846094 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.860862 kubelet[3163]: W0420 19:15:48.846109 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.860862 kubelet[3163]: E0420 19:15:48.846180 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.863009 kernel: audit: type=1006 audit(1776712548.844:656): pid=4085 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Apr 20 19:15:48.863187 kernel: audit: type=1300 audit(1776712548.844:656): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe33774a40 a2=3 a3=0 items=0 ppid=1 pid=4085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:15:48.844000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:15:48.885050 kubelet[3163]: I0420 19:15:48.883742 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9f02930c-961c-4c4b-8334-b61cbd5c3d20-socket-dir\") pod \"csi-node-driver-5h6vg\" (UID: \"9f02930c-961c-4c4b-8334-b61cbd5c3d20\") " pod="calico-system/csi-node-driver-5h6vg" Apr 20 19:15:48.889394 kernel: audit: type=1327 audit(1776712548.844:656): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:15:48.902632 kubelet[3163]: E0420 19:15:48.902446 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.905954 kubelet[3163]: W0420 19:15:48.903726 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.905954 kubelet[3163]: E0420 19:15:48.903762 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 20 19:15:48.943750 kubelet[3163]: E0420 19:15:48.942020 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:48.944932 kubelet[3163]: W0420 19:15:48.944915 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:48.945222 kubelet[3163]: E0420 19:15:48.945164 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 20 19:15:49.037688 kubelet[3163]: E0420 19:15:49.028646 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 20 19:15:49.037688 kubelet[3163]: W0420 19:15:49.028915 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 20 19:15:49.051453 kubelet[3163]: E0420 19:15:49.038313 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 20 19:15:49.051453 kubelet[3163]: E0420 19:15:49.044844 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.051453 kubelet[3163]: W0420 19:15:49.045153 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.051453 kubelet[3163]: E0420 19:15:49.045185 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.051453 kubelet[3163]: E0420 19:15:49.051247 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.051830 kubelet[3163]: W0420 19:15:49.051412 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.051830 kubelet[3163]: E0420 19:15:49.051731 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.064219 kubelet[3163]: E0420 19:15:49.062800 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.064219 kubelet[3163]: W0420 19:15:49.063044 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.064219 kubelet[3163]: E0420 19:15:49.063184 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.064219 kubelet[3163]: E0420 19:15:49.064199 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.064219 kubelet[3163]: W0420 19:15:49.064215 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.064849 kubelet[3163]: E0420 19:15:49.064236 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.065148 kubelet[3163]: E0420 19:15:49.065094 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.065148 kubelet[3163]: W0420 19:15:49.065132 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.065229 kubelet[3163]: E0420 19:15:49.065147 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.081304 systemd-logind[1627]: New session '21' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:15:49.115832 kubelet[3163]: E0420 19:15:49.115723 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.116002 kubelet[3163]: W0420 19:15:49.115926 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.116026 kubelet[3163]: E0420 19:15:49.115997 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.119239 kubelet[3163]: E0420 19:15:49.118988 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.139858 kubelet[3163]: W0420 19:15:49.125670 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.139858 kubelet[3163]: E0420 19:15:49.126152 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.175711 kubelet[3163]: E0420 19:15:49.163449 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.225000 audit: BPF prog-id=138 op=LOAD
Apr 20 19:15:49.307034 kernel: audit: type=1334 audit(1776712549.225:657): prog-id=138 op=LOAD
Apr 20 19:15:49.307357 kubelet[3163]: W0420 19:15:49.176652 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.307357 kubelet[3163]: E0420 19:15:49.182948 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.335682 kubelet[3163]: E0420 19:15:49.332101 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.335682 kubelet[3163]: W0420 19:15:49.332329 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.335682 kubelet[3163]: E0420 19:15:49.332374 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.348000 audit: BPF prog-id=139 op=LOAD
Apr 20 19:15:49.348000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ee240 a2=98 a3=0 items=0 ppid=4032 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:49.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626134613863343361323166373133353434353632306436343233
Apr 20 19:15:49.348846 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 20 19:15:49.348000 audit: BPF prog-id=139 op=UNLOAD
Apr 20 19:15:49.348000 audit[4059]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:49.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626134613863343361323166373133353434353632306436343233
Apr 20 19:15:49.352000 audit: BPF prog-id=140 op=LOAD
Apr 20 19:15:49.352000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ee490 a2=98 a3=0 items=0 ppid=4032 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:49.352000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626134613863343361323166373133353434353632306436343233
Apr 20 19:15:49.373000 audit: BPF prog-id=141 op=LOAD
Apr 20 19:15:49.373000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001ee220 a2=98 a3=0 items=0 ppid=4032 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:49.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626134613863343361323166373133353434353632306436343233
Apr 20 19:15:49.373000 audit: BPF prog-id=141 op=UNLOAD
Apr 20 19:15:49.373000 audit[4059]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:49.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626134613863343361323166373133353434353632306436343233
Apr 20 19:15:49.374000 audit: BPF prog-id=140 op=UNLOAD
Apr 20 19:15:49.374000 audit[4059]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:49.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626134613863343361323166373133353434353632306436343233
Apr 20 19:15:49.374000 audit: BPF prog-id=142 op=LOAD
Apr 20 19:15:49.374000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ee6f0 a2=98 a3=0 items=0 ppid=4032 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:49.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626134613863343361323166373133353434353632306436343233
Apr 20 19:15:49.442428 kernel: audit: type=1334 audit(1776712549.348:658): prog-id=139 op=LOAD
Apr 20 19:15:49.477001 kubelet[3163]: E0420 19:15:49.385248 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.477001 kubelet[3163]: W0420 19:15:49.385513 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.477001 kubelet[3163]: E0420 19:15:49.385652 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.512301 kernel: audit: type=1300 audit(1776712549.348:658): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ee240 a2=98 a3=0 items=0 ppid=4032 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:49.514645 kernel: audit: type=1327 audit(1776712549.348:658): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162626134613863343361323166373133353434353632306436343233
Apr 20 19:15:49.532806 kubelet[3163]: E0420 19:15:49.532696 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.541579 kubelet[3163]: W0420 19:15:49.532761 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.541579 kubelet[3163]: E0420 19:15:49.532972 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.552000 audit[4085]: AUDIT1105 pid=4085 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:15:49.659000 audit[4193]: AUDIT1103 pid=4193 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:15:49.661143 kubelet[3163]: E0420 19:15:49.544812 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.661143 kubelet[3163]: W0420 19:15:49.544926 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.661143 kubelet[3163]: E0420 19:15:49.545027 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.661143 kubelet[3163]: E0420 19:15:49.644106 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.661143 kubelet[3163]: W0420 19:15:49.644146 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.661143 kubelet[3163]: E0420 19:15:49.644251 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.661143 kubelet[3163]: E0420 19:15:49.655283 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.661143 kubelet[3163]: W0420 19:15:49.655308 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.661143 kubelet[3163]: E0420 19:15:49.655346 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.706826 kubelet[3163]: E0420 19:15:49.706160 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.706826 kubelet[3163]: W0420 19:15:49.706445 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.706826 kubelet[3163]: E0420 19:15:49.706525 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.717248 kubelet[3163]: E0420 19:15:49.714580 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.717248 kubelet[3163]: W0420 19:15:49.714661 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.717248 kubelet[3163]: E0420 19:15:49.714718 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.717248 kubelet[3163]: E0420 19:15:49.715227 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.717248 kubelet[3163]: W0420 19:15:49.715237 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.717248 kubelet[3163]: E0420 19:15:49.715248 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.717248 kubelet[3163]: E0420 19:15:49.716938 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.717248 kubelet[3163]: W0420 19:15:49.717014 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.717248 kubelet[3163]: E0420 19:15:49.717032 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.717696 kubelet[3163]: E0420 19:15:49.717351 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.717696 kubelet[3163]: W0420 19:15:49.717359 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.717696 kubelet[3163]: E0420 19:15:49.717368 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.717696 kubelet[3163]: E0420 19:15:49.717570 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.717696 kubelet[3163]: W0420 19:15:49.717576 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.717696 kubelet[3163]: E0420 19:15:49.717583 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.717790 kubelet[3163]: E0420 19:15:49.717732 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.717790 kubelet[3163]: W0420 19:15:49.717738 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.717790 kubelet[3163]: E0420 19:15:49.717747 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.720445 kubelet[3163]: E0420 19:15:49.718252 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.720445 kubelet[3163]: W0420 19:15:49.719152 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.720445 kubelet[3163]: E0420 19:15:49.719231 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.728921 kubelet[3163]: E0420 19:15:49.724789 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.728921 kubelet[3163]: W0420 19:15:49.724836 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.728921 kubelet[3163]: E0420 19:15:49.724861 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.728921 kubelet[3163]: E0420 19:15:49.725367 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:49.728921 kubelet[3163]: W0420 19:15:49.725410 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:49.728921 kubelet[3163]: E0420 19:15:49.725425 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:49.925399 kubelet[3163]: E0420 19:15:49.922368 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20"
Apr 20 19:15:50.520404 kubelet[3163]: E0420 19:15:50.520041 3163 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 20 19:15:50.520404 kubelet[3163]: W0420 19:15:50.520063 3163 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 20 19:15:50.520404 kubelet[3163]: E0420 19:15:50.520087 3163 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 20 19:15:50.615191 containerd[1659]: time="2026-04-20T19:15:50.614977481Z" level=info msg="RunPodSandbox for name:\"calico-node-g9fs5\" uid:\"071d23f6-a94b-4165-9229-2d0570b516d8\" namespace:\"calico-system\" returns sandbox id \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\""
Apr 20 19:15:50.754091 containerd[1659]: time="2026-04-20T19:15:50.753928766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 20 19:15:51.964872 kubelet[3163]: E0420 19:15:51.964705 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20"
Apr 20 19:15:52.765043 kubelet[3163]: E0420 19:15:52.763802 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:15:53.899069 kubelet[3163]: E0420 19:15:53.870327 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20"
Apr 20 19:15:55.525016 kubelet[3163]: E0420 19:15:55.524724 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20"
Apr 20 19:15:57.850487 kubelet[3163]: E0420 19:15:57.850042 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:15:57.935199 sshd[4193]: Connection closed by 10.0.0.1 port 40338
Apr 20 19:15:57.938000 audit[4255]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Apr 20 19:15:57.938000 audit[4255]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0e9f6f30 a2=0 a3=7ffe0e9f6f1c items=0 ppid=3270 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:57.957000 audit[4085]: AUDIT1106 pid=4085 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:15:57.957000 audit[4085]: AUDIT1104 pid=4085 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:15:57.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Apr 20 19:15:58.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-8200-10.0.0.14:22-10.0.0.1:40338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:15:58.152000 audit[4255]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Apr 20 19:15:58.152000 audit[4255]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe0e9f6f30 a2=0 a3=7ffe0e9f6f1c items=0 ppid=3270 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:58.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Apr 20 19:15:57.934968 sshd-session[4085]: pam_unix(sshd:session): session closed for user core
Apr 20 19:15:58.286079 kubelet[3163]: E0420 19:15:57.959139 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20"
Apr 20 19:15:58.287324 kernel: kauditd_printk_skb: 20 callbacks suppressed
Apr 20 19:15:58.153893 systemd[1]: sshd@19-8200-10.0.0.14:22-10.0.0.1:40338.service: Deactivated successfully.
Apr 20 19:15:58.353433 kernel: audit: type=1325 audit(1776712557.938:667): table=filter:119 family=2 entries=21 op=nft_register_rule pid=4255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Apr 20 19:15:58.222237 systemd[1]: session-21.scope: Deactivated successfully.
Apr 20 19:15:58.355176 kernel: audit: type=1300 audit(1776712557.938:667): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe0e9f6f30 a2=0 a3=7ffe0e9f6f1c items=0 ppid=3270 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:58.277372 systemd[1]: session-21.scope: Consumed 5.053s CPU time, 17.4M memory peak.
Apr 20 19:15:58.431170 kernel: audit: type=1106 audit(1776712557.957:668): pid=4085 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:15:58.444858 kernel: audit: type=1104 audit(1776712557.957:669): pid=4085 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:15:58.445231 kernel: audit: type=1327 audit(1776712557.938:667): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Apr 20 19:15:58.445118 systemd-logind[1627]: Session 21 logged out. Waiting for processes to exit.
Apr 20 19:15:58.463274 kernel: audit: type=1131 audit(1776712558.152:670): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-8200-10.0.0.14:22-10.0.0.1:40338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:15:58.461155 systemd-logind[1627]: Removed session 21.
Apr 20 19:15:58.481959 kernel: audit: type=1325 audit(1776712558.152:671): table=nat:120 family=2 entries=19 op=nft_register_chain pid=4255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Apr 20 19:15:58.482031 kernel: audit: type=1300 audit(1776712558.152:671): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe0e9f6f30 a2=0 a3=7ffe0e9f6f1c items=0 ppid=3270 pid=4255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:15:58.482128 kernel: audit: type=1327 audit(1776712558.152:671): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Apr 20 19:15:59.485141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363858560.mount: Deactivated successfully.
Apr 20 19:15:59.926159 kubelet[3163]: E0420 19:15:59.926067 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.067s"
Apr 20 19:15:59.938238 kubelet[3163]: E0420 19:15:59.932195 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20"
Apr 20 19:16:01.862749 kubelet[3163]: E0420 19:16:01.862526 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20"
Apr 20 19:16:01.912427 containerd[1659]: time="2026-04-20T19:16:01.912044850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:16:02.038488 containerd[1659]: time="2026-04-20T19:16:01.923808768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6180462"
Apr 20 19:16:02.038488 containerd[1659]: time="2026-04-20T19:16:02.030174181Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:16:02.041878 containerd[1659]: time="2026-04-20T19:16:02.041745166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:16:02.043443 containerd[1659]: time="2026-04-20T19:16:02.042483924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 11.288442906s"
Apr 20 19:16:02.043443 containerd[1659]: time="2026-04-20T19:16:02.042841058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 20 19:16:02.266447 containerd[1659]: time="2026-04-20T19:16:02.261259430Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for container name:\"flexvol-driver\""
Apr 20 19:16:02.555238 containerd[1659]: time="2026-04-20T19:16:02.546392556Z" level=info msg="Container 5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55: CDI devices from CRI Config.CDIDevices: []"
Apr 20 19:16:02.745635 containerd[1659]: time="2026-04-20T19:16:02.745144057Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for name:\"flexvol-driver\" returns container id \"5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55\""
Apr 20 19:16:02.788902 containerd[1659]: time="2026-04-20T19:16:02.788621072Z" level=info msg="StartContainer for \"5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55\""
Apr 20 19:16:02.796255 containerd[1659]: time="2026-04-20T19:16:02.796127207Z" level=info msg="connecting to shim 5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55" address="unix:///run/containerd/s/d6fd6f578359a16fb6047ac6b8915843558ecdd02f7ae288b74c76a061bb8a9a" protocol=ttrpc version=3
Apr 20 19:16:02.986776 kubelet[3163]: E0420 19:16:02.986723 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 20 19:16:03.226089 systemd[1]: Started sshd@20-8201-10.0.0.14:22-10.0.0.1:46616.service - OpenSSH per-connection server daemon (10.0.0.1:46616).
Apr 20 19:16:03.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-8201-10.0.0.14:22-10.0.0.1:46616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:16:03.240464 kernel: audit: type=1130 audit(1776712563.224:672): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-8201-10.0.0.14:22-10.0.0.1:46616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:16:03.843726 systemd[1]: Started cri-containerd-5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55.scope - libcontainer container 5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55.
Apr 20 19:16:03.917056 kubelet[3163]: E0420 19:16:03.914236 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20"
Apr 20 19:16:04.812000 audit: BPF prog-id=143 op=LOAD
Apr 20 19:16:04.812000 audit[4276]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c00017a490 a2=98 a3=0 items=0 ppid=4032 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:16:04.812000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565306331393138663338353932633733666362373362643935643137
Apr 20 19:16:04.814000 audit: BPF prog-id=144 op=LOAD
Apr 20 19:16:04.814000 audit[4276]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a220 a2=98 a3=0 items=0 ppid=4032 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:16:04.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565306331393138663338353932633733666362373362643935643137
Apr 20
19:16:04.814000 audit: BPF prog-id=144 op=UNLOAD Apr 20 19:16:04.814000 audit[4276]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:04.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565306331393138663338353932633733666362373362643935643137 Apr 20 19:16:04.963902 kernel: audit: type=1334 audit(1776712564.812:673): prog-id=143 op=LOAD Apr 20 19:16:04.964168 sshd[4284]: Accepted publickey for core from 10.0.0.1 port 46616 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:16:04.814000 audit: BPF prog-id=143 op=UNLOAD Apr 20 19:16:04.814000 audit[4276]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=13 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:04.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565306331393138663338353932633733666362373362643935643137 Apr 20 19:16:04.814000 audit: BPF prog-id=145 op=LOAD Apr 20 19:16:04.814000 audit[4276]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c00017a6f0 a2=98 a3=0 items=0 ppid=4032 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:04.814000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565306331393138663338353932633733666362373362643935643137 Apr 20 19:16:04.935000 audit[4284]: AUDIT1101 pid=4284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:04.942000 audit[4284]: AUDIT1103 pid=4284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:04.942000 audit[4284]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5e235d80 a2=3 a3=0 items=0 ppid=1 pid=4284 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:04.942000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:16:04.944996 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:16:05.000106 kernel: audit: type=1300 audit(1776712564.812:673): arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c00017a490 a2=98 a3=0 items=0 ppid=4032 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:05.000161 kernel: audit: type=1327 audit(1776712564.812:673): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565306331393138663338353932633733666362373362643935643137 Apr 20 19:16:05.000262 kernel: audit: type=1334 audit(1776712564.814:674): prog-id=144 op=LOAD Apr 20 19:16:05.000279 kernel: audit: type=1300 audit(1776712564.814:674): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a220 a2=98 a3=0 items=0 ppid=4032 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:05.000330 kernel: audit: type=1327 audit(1776712564.814:674): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565306331393138663338353932633733666362373362643935643137 Apr 20 19:16:05.000348 kernel: audit: type=1334 audit(1776712564.814:675): prog-id=144 op=UNLOAD Apr 20 19:16:05.000416 kernel: audit: type=1300 audit(1776712564.814:675): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:05.000502 kernel: audit: type=1327 audit(1776712564.814:675): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565306331393138663338353932633733666362373362643935643137 Apr 20 19:16:05.134618 systemd-logind[1627]: New session '22' of user 'core' with class 'user' and type 'tty'. 
Apr 20 19:16:05.187760 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 20 19:16:05.259000 audit[4284]: AUDIT1105 pid=4284 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:05.309000 audit[4300]: AUDIT1103 pid=4300 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:05.696109 containerd[1659]: time="2026-04-20T19:16:05.695965148Z" level=info msg="StartContainer for \"5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55\" returns successfully" Apr 20 19:16:05.872331 kubelet[3163]: E0420 19:16:05.865456 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:06.197000 audit: BPF prog-id=145 op=UNLOAD Apr 20 19:16:06.194948 systemd[1]: cri-containerd-5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55.scope: Deactivated successfully. 
Apr 20 19:16:06.321425 containerd[1659]: time="2026-04-20T19:16:06.321052910Z" level=info msg="received container exit event container_id:\"5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55\" id:\"5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55\" pid:4292 exited_at:{seconds:1776712566 nanos:284106523}" Apr 20 19:16:07.060627 kubelet[3163]: E0420 19:16:07.059505 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:16:07.384047 sshd[4300]: Connection closed by 10.0.0.1 port 46616 Apr 20 19:16:07.385466 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Apr 20 19:16:07.384000 audit[4284]: AUDIT1106 pid=4284 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:07.384000 audit[4284]: AUDIT1104 pid=4284 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:07.494061 systemd[1]: sshd@20-8201-10.0.0.14:22-10.0.0.1:46616.service: Deactivated successfully. Apr 20 19:16:07.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-8201-10.0.0.14:22-10.0.0.1:46616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:07.502813 systemd[1]: session-22.scope: Deactivated successfully. Apr 20 19:16:07.503443 systemd[1]: session-22.scope: Consumed 1.428s CPU time, 15.8M memory peak. 
Apr 20 19:16:07.513034 systemd-logind[1627]: Session 22 logged out. Waiting for processes to exit. Apr 20 19:16:07.720774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55-rootfs.mount: Deactivated successfully. Apr 20 19:16:07.739190 systemd-logind[1627]: Removed session 22. Apr 20 19:16:07.861077 kubelet[3163]: E0420 19:16:07.860018 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:08.115824 kubelet[3163]: E0420 19:16:08.108865 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:16:08.707724 containerd[1659]: time="2026-04-20T19:16:08.706341954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 20 19:16:09.891666 kubelet[3163]: E0420 19:16:09.884806 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:11.909175 kubelet[3163]: E0420 19:16:11.907722 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:12.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-8202-10.0.0.14:22-10.0.0.1:49500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:12.613167 kernel: kauditd_printk_skb: 17 callbacks suppressed Apr 20 19:16:12.593244 systemd[1]: Started sshd@21-8202-10.0.0.14:22-10.0.0.1:49500.service - OpenSSH per-connection server daemon (10.0.0.1:49500). Apr 20 19:16:12.614766 kernel: audit: type=1130 audit(1776712572.591:687): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-8202-10.0.0.14:22-10.0.0.1:49500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:13.198231 kubelet[3163]: E0420 19:16:13.197903 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:16:13.352000 audit[4348]: AUDIT1101 pid=4348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:13.359421 sshd[4348]: Accepted publickey for core from 10.0.0.1 port 49500 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:16:13.353000 audit[4348]: AUDIT1103 pid=4348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:13.353000 audit[4348]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5fd86630 a2=3 a3=0 items=0 ppid=1 pid=4348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:13.353000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:16:13.359062 sshd-session[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:16:13.392117 kernel: audit: type=1101 audit(1776712573.352:688): pid=4348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:13.392238 kernel: audit: type=1103 audit(1776712573.353:689): pid=4348 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:13.392254 kernel: audit: type=1006 audit(1776712573.353:690): pid=4348 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Apr 20 19:16:13.392269 kernel: audit: type=1300 audit(1776712573.353:690): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe5fd86630 a2=3 a3=0 items=0 ppid=1 pid=4348 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:13.392335 kernel: audit: type=1327 audit(1776712573.353:690): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:16:13.445647 systemd-logind[1627]: New session '23' of user 'core' with class 'user' and type 'tty'. Apr 20 19:16:13.460018 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 20 19:16:13.553000 audit[4348]: AUDIT1105 pid=4348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:13.586000 audit[4352]: AUDIT1103 pid=4352 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:13.602979 kernel: audit: type=1105 audit(1776712573.553:691): pid=4348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:13.603440 kernel: audit: type=1103 audit(1776712573.586:692): pid=4352 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:13.872900 kubelet[3163]: E0420 19:16:13.863273 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:14.719140 sshd[4352]: Connection closed by 10.0.0.1 port 49500 Apr 20 19:16:14.726798 sshd-session[4348]: pam_unix(sshd:session): session closed for user core Apr 20 19:16:14.735000 audit[4348]: AUDIT1106 pid=4348 uid=0 auid=500 ses=23 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:14.735000 audit[4348]: AUDIT1104 pid=4348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:14.749911 kernel: audit: type=1106 audit(1776712574.735:693): pid=4348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:14.751065 kernel: audit: type=1104 audit(1776712574.735:694): pid=4348 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:14.752787 systemd[1]: sshd@21-8202-10.0.0.14:22-10.0.0.1:49500.service: Deactivated successfully. Apr 20 19:16:14.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-8202-10.0.0.14:22-10.0.0.1:49500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:14.771369 systemd[1]: session-23.scope: Deactivated successfully. Apr 20 19:16:14.801161 systemd-logind[1627]: Session 23 logged out. Waiting for processes to exit. Apr 20 19:16:14.806052 systemd-logind[1627]: Removed session 23. 
Apr 20 19:16:15.869096 kubelet[3163]: E0420 19:16:15.868059 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:16.867413 kubelet[3163]: E0420 19:16:16.862081 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:16:17.876123 kubelet[3163]: E0420 19:16:17.875345 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:18.258859 kubelet[3163]: E0420 19:16:18.258617 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:16:19.975655 kubelet[3163]: E0420 19:16:19.975011 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:20.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-4102-10.0.0.14:22-10.0.0.1:52310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:16:20.439648 kernel: kauditd_printk_skb: 1 callbacks suppressed Apr 20 19:16:20.415386 systemd[1]: Started sshd@22-4102-10.0.0.14:22-10.0.0.1:52310.service - OpenSSH per-connection server daemon (10.0.0.1:52310). Apr 20 19:16:20.440446 kernel: audit: type=1130 audit(1776712580.417:696): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-4102-10.0.0.14:22-10.0.0.1:52310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:21.577000 audit[4373]: AUDIT1101 pid=4373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:21.631103 kernel: audit: type=1101 audit(1776712581.577:697): pid=4373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:21.642185 sshd[4373]: Accepted publickey for core from 10.0.0.1 port 52310 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:16:21.652000 audit[4373]: AUDIT1103 pid=4373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:21.671140 sshd-session[4373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:16:21.653000 audit[4373]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecb6054b0 a2=3 a3=0 items=0 ppid=1 pid=4373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:21.653000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:16:21.696700 kernel: audit: type=1103 audit(1776712581.652:698): pid=4373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:21.697021 kernel: audit: type=1006 audit(1776712581.653:699): pid=4373 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Apr 20 19:16:21.697067 kernel: audit: type=1300 audit(1776712581.653:699): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecb6054b0 a2=3 a3=0 items=0 ppid=1 pid=4373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:21.697087 kernel: audit: type=1327 audit(1776712581.653:699): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:16:21.914977 kubelet[3163]: E0420 19:16:21.903517 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:22.185204 systemd-logind[1627]: New session '24' of user 'core' with class 'user' and type 'tty'. Apr 20 19:16:22.423910 systemd[1]: Started session-24.scope - Session 24 of User core. 
Apr 20 19:16:22.459000 audit[4373]: AUDIT1105 pid=4373 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:22.480000 audit[4377]: AUDIT1103 pid=4377 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:22.533782 kernel: audit: type=1105 audit(1776712582.459:700): pid=4373 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:22.534744 kernel: audit: type=1103 audit(1776712582.480:701): pid=4377 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:23.466376 kubelet[3163]: E0420 19:16:23.460419 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:16:23.640691 kubelet[3163]: E0420 19:16:23.639751 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 
19:16:24.925447 kubelet[3163]: E0420 19:16:24.924729 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:25.922687 kubelet[3163]: E0420 19:16:25.922358 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.049s" Apr 20 19:16:27.568387 kubelet[3163]: E0420 19:16:27.544462 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:28.024191 sshd[4377]: Connection closed by 10.0.0.1 port 52310 Apr 20 19:16:28.045000 audit[4373]: AUDIT1106 pid=4373 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:28.068000 audit[4373]: AUDIT1104 pid=4373 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:28.045141 sshd-session[4373]: pam_unix(sshd:session): session closed for user core Apr 20 19:16:28.086764 kernel: audit: type=1106 audit(1776712588.045:702): pid=4373 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:28.087099 kernel: audit: type=1104 audit(1776712588.068:703): pid=4373 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:28.106035 systemd[1]: sshd@22-4102-10.0.0.14:22-10.0.0.1:52310.service: Deactivated successfully. Apr 20 19:16:28.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-4102-10.0.0.14:22-10.0.0.1:52310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:28.268261 kernel: audit: type=1131 audit(1776712588.110:704): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-4102-10.0.0.14:22-10.0.0.1:52310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:28.337659 systemd[1]: session-24.scope: Deactivated successfully. Apr 20 19:16:28.433243 systemd[1]: session-24.scope: Consumed 3.477s CPU time, 16.3M memory peak. Apr 20 19:16:28.657839 systemd-logind[1627]: Session 24 logged out. Waiting for processes to exit. Apr 20 19:16:29.064319 kubelet[3163]: E0420 19:16:28.858704 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:16:29.066915 systemd-logind[1627]: Removed session 24. 
Apr 20 19:16:29.931716 kubelet[3163]: E0420 19:16:29.931357 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:31.135250 kubelet[3163]: E0420 19:16:31.134898 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:16:31.942459 kubelet[3163]: E0420 19:16:31.923136 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:33.474088 systemd[1]: Started sshd@23-8203-10.0.0.14:22-10.0.0.1:40492.service - OpenSSH per-connection server daemon (10.0.0.1:40492). Apr 20 19:16:33.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-8203-10.0.0.14:22-10.0.0.1:40492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:33.555366 kernel: audit: type=1130 audit(1776712593.484:705): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-8203-10.0.0.14:22-10.0.0.1:40492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:16:33.876241 kubelet[3163]: E0420 19:16:33.875933 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:34.006356 kubelet[3163]: E0420 19:16:34.006129 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:16:35.878841 kubelet[3163]: E0420 19:16:35.877705 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:35.910000 audit[4396]: AUDIT1101 pid=4396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:36.175818 kernel: audit: type=1101 audit(1776712595.910:706): pid=4396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:36.189994 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 40492 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:16:36.531000 audit[4396]: AUDIT1103 pid=4396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:36.547820 kernel: audit: type=1103 audit(1776712596.531:707): pid=4396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:36.567000 audit[4396]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc9fbf490 a2=3 a3=0 items=0 ppid=1 pid=4396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:36.567000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:16:36.611365 kernel: audit: type=1006 audit(1776712596.567:708): pid=4396 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Apr 20 19:16:36.611917 kernel: audit: type=1300 audit(1776712596.567:708): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc9fbf490 a2=3 a3=0 items=0 ppid=1 pid=4396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:36.611459 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:16:36.618505 kernel: audit: type=1327 audit(1776712596.567:708): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:16:37.162468 systemd-logind[1627]: New session '25' of user 'core' with class 'user' and type 'tty'. Apr 20 19:16:37.549083 systemd[1]: Started session-25.scope - Session 25 of User core. 
Apr 20 19:16:38.510000 audit[4396]: AUDIT1105 pid=4396 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:38.669007 kernel: audit: type=1105 audit(1776712598.510:709): pid=4396 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:38.749782 kubelet[3163]: E0420 19:16:38.749144 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:38.841000 audit[4400]: AUDIT1103 pid=4400 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:38.888979 kernel: audit: type=1103 audit(1776712598.841:710): pid=4400 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:39.622964 kubelet[3163]: E0420 19:16:39.621873 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 
19:16:41.108841 kubelet[3163]: E0420 19:16:41.108171 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.242s" Apr 20 19:16:42.454182 kubelet[3163]: E0420 19:16:42.443766 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:42.545014 kubelet[3163]: E0420 19:16:42.544638 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:16:43.553272 kubelet[3163]: E0420 19:16:43.548012 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.115s" Apr 20 19:16:44.219131 sshd[4400]: Connection closed by 10.0.0.1 port 40492 Apr 20 19:16:44.233644 sshd-session[4396]: pam_unix(sshd:session): session closed for user core Apr 20 19:16:44.340000 audit[4396]: AUDIT1106 pid=4396 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:44.362885 kernel: audit: type=1106 audit(1776712604.340:711): pid=4396 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:44.363000 audit[4396]: AUDIT1104 pid=4396 uid=0 auid=500 
ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:44.399364 kernel: audit: type=1104 audit(1776712604.363:712): pid=4396 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:44.443304 systemd[1]: sshd@23-8203-10.0.0.14:22-10.0.0.1:40492.service: Deactivated successfully. Apr 20 19:16:44.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-8203-10.0.0.14:22-10.0.0.1:40492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:44.531159 systemd[1]: sshd@23-8203-10.0.0.14:22-10.0.0.1:40492.service: Consumed 1.020s CPU time, 4.4M memory peak. Apr 20 19:16:44.540910 kernel: audit: type=1131 audit(1776712604.486:713): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-8203-10.0.0.14:22-10.0.0.1:40492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:44.562663 systemd[1]: session-25.scope: Deactivated successfully. Apr 20 19:16:44.612445 systemd[1]: session-25.scope: Consumed 3.445s CPU time, 18.1M memory peak. Apr 20 19:16:44.648767 systemd-logind[1627]: Session 25 logged out. Waiting for processes to exit. Apr 20 19:16:44.661986 systemd-logind[1627]: Removed session 25. 
Apr 20 19:16:44.825405 kubelet[3163]: E0420 19:16:44.819055 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:16:44.954195 kubelet[3163]: E0420 19:16:44.953981 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:46.861040 kubelet[3163]: E0420 19:16:46.860647 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:49.228955 kubelet[3163]: E0420 19:16:49.228646 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:49.933204 kubelet[3163]: E0420 19:16:49.930507 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:16:50.112011 systemd[1]: Started sshd@24-12292-10.0.0.14:22-10.0.0.1:45916.service - OpenSSH per-connection server daemon (10.0.0.1:45916). 
Apr 20 19:16:50.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-12292-10.0.0.14:22-10.0.0.1:45916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:50.250108 kernel: audit: type=1130 audit(1776712610.135:714): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-12292-10.0.0.14:22-10.0.0.1:45916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:51.115869 kubelet[3163]: E0420 19:16:51.115656 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:51.452000 audit[4423]: AUDIT1101 pid=4423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:51.554674 kernel: audit: type=1101 audit(1776712611.452:715): pid=4423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:51.554000 audit[4423]: AUDIT1103 pid=4423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:51.573730 sshd[4423]: Accepted publickey for core from 10.0.0.1 port 45916 ssh2: RSA 
SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:16:51.560000 audit[4423]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc68ea5280 a2=3 a3=0 items=0 ppid=1 pid=4423 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:51.560000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:16:51.596174 kernel: audit: type=1103 audit(1776712611.554:716): pid=4423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:51.578416 sshd-session[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:16:51.596947 kernel: audit: type=1006 audit(1776712611.560:717): pid=4423 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Apr 20 19:16:51.597037 kernel: audit: type=1300 audit(1776712611.560:717): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc68ea5280 a2=3 a3=0 items=0 ppid=1 pid=4423 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:16:51.597103 kernel: audit: type=1327 audit(1776712611.560:717): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:16:51.851046 systemd-logind[1627]: New session '26' of user 'core' with class 'user' and type 'tty'. Apr 20 19:16:51.870090 systemd[1]: Started session-26.scope - Session 26 of User core. 
Apr 20 19:16:52.170000 audit[4423]: AUDIT1105 pid=4423 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:52.179840 kernel: audit: type=1105 audit(1776712612.170:718): pid=4423 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:52.224000 audit[4430]: AUDIT1103 pid=4430 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:52.230792 kernel: audit: type=1103 audit(1776712612.224:719): pid=4430 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:52.868707 kubelet[3163]: E0420 19:16:52.867076 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:55.084314 kubelet[3163]: E0420 19:16:55.082766 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 
19:16:55.227253 kubelet[3163]: E0420 19:16:55.225045 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:55.823905 sshd[4430]: Connection closed by 10.0.0.1 port 45916 Apr 20 19:16:55.854117 sshd-session[4423]: pam_unix(sshd:session): session closed for user core Apr 20 19:16:55.935000 audit[4423]: AUDIT1106 pid=4423 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:55.941000 audit[4423]: AUDIT1104 pid=4423 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:55.958245 kernel: audit: type=1106 audit(1776712615.935:720): pid=4423 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:55.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-12292-10.0.0.14:22-10.0.0.1:45916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:55.959861 systemd[1]: sshd@24-12292-10.0.0.14:22-10.0.0.1:45916.service: Deactivated successfully. 
Apr 20 19:16:55.961926 kernel: audit: type=1104 audit(1776712615.941:721): pid=4423 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:16:55.962019 kernel: audit: type=1131 audit(1776712615.959:722): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-12292-10.0.0.14:22-10.0.0.1:45916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:16:56.086899 systemd[1]: session-26.scope: Deactivated successfully. Apr 20 19:16:56.091315 systemd[1]: session-26.scope: Consumed 2.185s CPU time, 18M memory peak. Apr 20 19:16:56.274438 systemd-logind[1627]: Session 26 logged out. Waiting for processes to exit. Apr 20 19:16:56.361458 systemd-logind[1627]: Removed session 26. Apr 20 19:16:57.014941 kubelet[3163]: E0420 19:16:57.014707 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:16:58.869863 kubelet[3163]: E0420 19:16:58.869448 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:00.128132 kubelet[3163]: E0420 19:17:00.115815 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:17:01.070760 kubelet[3163]: E0420 
19:17:01.069165 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:01.295450 systemd[1]: Started sshd@25-8204-10.0.0.14:22-10.0.0.1:55250.service - OpenSSH per-connection server daemon (10.0.0.1:55250). Apr 20 19:17:01.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-8204-10.0.0.14:22-10.0.0.1:55250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:01.414837 kernel: audit: type=1130 audit(1776712621.295:723): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-8204-10.0.0.14:22-10.0.0.1:55250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:17:01.941000 audit[4448]: AUDIT1101 pid=4448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:01.965000 audit[4448]: AUDIT1103 pid=4448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:01.966000 audit[4448]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff07334c20 a2=3 a3=0 items=0 ppid=1 pid=4448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:02.032180 sshd[4448]: Accepted publickey for core from 10.0.0.1 port 55250 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:17:01.966000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:02.037103 kernel: audit: type=1101 audit(1776712621.941:724): pid=4448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:02.030513 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:17:02.040633 kernel: audit: type=1103 audit(1776712621.965:725): pid=4448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 
19:17:02.040776 kernel: audit: type=1006 audit(1776712621.966:726): pid=4448 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Apr 20 19:17:02.040859 kernel: audit: type=1300 audit(1776712621.966:726): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff07334c20 a2=3 a3=0 items=0 ppid=1 pid=4448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:02.040882 kernel: audit: type=1327 audit(1776712621.966:726): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:02.092972 systemd-logind[1627]: New session '27' of user 'core' with class 'user' and type 'tty'. Apr 20 19:17:02.232831 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 20 19:17:02.306000 audit[4448]: AUDIT1105 pid=4448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:02.338081 kernel: audit: type=1105 audit(1776712622.306:727): pid=4448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:02.352000 audit[4456]: AUDIT1103 pid=4456 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:02.378481 kernel: audit: type=1103 audit(1776712622.352:728): 
pid=4456 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:02.914668 kubelet[3163]: E0420 19:17:02.912918 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:04.173441 sshd[4456]: Connection closed by 10.0.0.1 port 55250 Apr 20 19:17:04.184134 sshd-session[4448]: pam_unix(sshd:session): session closed for user core Apr 20 19:17:04.209000 audit[4448]: AUDIT1106 pid=4448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:04.209000 audit[4448]: AUDIT1104 pid=4448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:04.225799 kernel: audit: type=1106 audit(1776712624.209:729): pid=4448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:04.238028 kernel: audit: type=1104 audit(1776712624.209:730): pid=4448 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:04.256137 systemd[1]: sshd@25-8204-10.0.0.14:22-10.0.0.1:55250.service: Deactivated successfully. Apr 20 19:17:04.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-8204-10.0.0.14:22-10.0.0.1:55250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:04.371095 systemd[1]: session-27.scope: Deactivated successfully. Apr 20 19:17:04.387390 systemd[1]: session-27.scope: Consumed 1.178s CPU time, 16.2M memory peak. Apr 20 19:17:04.469327 systemd-logind[1627]: Session 27 logged out. Waiting for processes to exit. Apr 20 19:17:04.570396 systemd-logind[1627]: Removed session 27. Apr 20 19:17:04.961221 kubelet[3163]: E0420 19:17:04.960808 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:05.143838 kubelet[3163]: E0420 19:17:05.143661 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:17:06.861700 kubelet[3163]: E0420 19:17:06.861233 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:07.170105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932489466.mount: Deactivated successfully. 
Apr 20 19:17:07.554510 containerd[1659]: time="2026-04-20T19:17:07.553450028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:17:07.556329 containerd[1659]: time="2026-04-20T19:17:07.555574206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=1, bytes read=153679095" Apr 20 19:17:07.565888 containerd[1659]: time="2026-04-20T19:17:07.565617712Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:17:07.590036 containerd[1659]: time="2026-04-20T19:17:07.587253558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:17:07.591945 containerd[1659]: time="2026-04-20T19:17:07.591849408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 58.884525747s" Apr 20 19:17:07.592154 containerd[1659]: time="2026-04-20T19:17:07.591950018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 20 19:17:07.648742 containerd[1659]: time="2026-04-20T19:17:07.648427841Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for container name:\"ebpf-bootstrap\"" Apr 20 19:17:07.706460 containerd[1659]: time="2026-04-20T19:17:07.706272040Z" level=info msg="Container 
de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:17:07.845970 containerd[1659]: time="2026-04-20T19:17:07.844289548Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for name:\"ebpf-bootstrap\" returns container id \"de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380\"" Apr 20 19:17:07.854949 containerd[1659]: time="2026-04-20T19:17:07.849245225Z" level=info msg="StartContainer for \"de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380\"" Apr 20 19:17:07.999658 containerd[1659]: time="2026-04-20T19:17:07.999110090Z" level=info msg="connecting to shim de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380" address="unix:///run/containerd/s/d6fd6f578359a16fb6047ac6b8915843558ecdd02f7ae288b74c76a061bb8a9a" protocol=ttrpc version=3 Apr 20 19:17:08.450616 systemd[1]: Started cri-containerd-de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380.scope - libcontainer container de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380. 
Apr 20 19:17:08.664000 audit: BPF prog-id=146 op=LOAD Apr 20 19:17:08.668446 kernel: kauditd_printk_skb: 1 callbacks suppressed Apr 20 19:17:08.664000 audit[4469]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c000210490 a2=98 a3=0 items=0 ppid=4032 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:08.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465306439663232643166373537663835383234366265353964643838 Apr 20 19:17:08.665000 audit: BPF prog-id=147 op=LOAD Apr 20 19:17:08.665000 audit[4469]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000210220 a2=98 a3=0 items=0 ppid=4032 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:08.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465306439663232643166373537663835383234366265353964643838 Apr 20 19:17:08.695497 kernel: audit: type=1334 audit(1776712628.664:732): prog-id=146 op=LOAD Apr 20 19:17:08.695842 kernel: audit: type=1300 audit(1776712628.664:732): arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c000210490 a2=98 a3=0 items=0 ppid=4032 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:08.696086 kernel: audit: type=1327 audit(1776712628.664:732): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465306439663232643166373537663835383234366265353964643838 Apr 20 19:17:08.696114 kernel: audit: type=1334 audit(1776712628.665:733): prog-id=147 op=LOAD Apr 20 19:17:08.696132 kernel: audit: type=1300 audit(1776712628.665:733): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000210220 a2=98 a3=0 items=0 ppid=4032 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:08.696226 kernel: audit: type=1327 audit(1776712628.665:733): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465306439663232643166373537663835383234366265353964643838 Apr 20 19:17:08.665000 audit: BPF prog-id=147 op=UNLOAD Apr 20 19:17:08.665000 audit[4469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:08.705602 kernel: audit: type=1334 audit(1776712628.665:734): prog-id=147 op=UNLOAD Apr 20 19:17:08.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465306439663232643166373537663835383234366265353964643838 Apr 20 19:17:08.705924 kernel: audit: type=1300 audit(1776712628.665:734): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4469 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:08.705953 kernel: audit: type=1327 audit(1776712628.665:734): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465306439663232643166373537663835383234366265353964643838 Apr 20 19:17:08.665000 audit: BPF prog-id=146 op=UNLOAD Apr 20 19:17:08.665000 audit[4469]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=13 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:08.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465306439663232643166373537663835383234366265353964643838 Apr 20 19:17:08.665000 audit: BPF prog-id=148 op=LOAD Apr 20 19:17:08.665000 audit[4469]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0002106f0 a2=98 a3=0 items=0 ppid=4032 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:08.665000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465306439663232643166373537663835383234366265353964643838 Apr 20 19:17:08.717000 audit[4489]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4489 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:08.717000 audit[4489]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd77aa17a0 a2=0 a3=7ffd77aa178c items=0 ppid=3270 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:08.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:08.721200 kernel: audit: type=1334 audit(1776712628.665:735): prog-id=146 op=UNLOAD Apr 20 19:17:08.767000 audit[4489]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:08.767000 audit[4489]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd77aa17a0 a2=0 a3=0 items=0 ppid=3270 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:08.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:08.897678 kubelet[3163]: E0420 19:17:08.896150 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:09.160000 audit[4495]: NETFILTER_CFG table=filter:123 family=2 entries=17 op=nft_register_rule pid=4495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:09.160000 audit[4495]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 
a0=3 a1=7ffeba01d630 a2=0 a3=7ffeba01d61c items=0 ppid=3270 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:09.160000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:09.169000 audit[4495]: NETFILTER_CFG table=nat:124 family=2 entries=35 op=nft_register_chain pid=4495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:09.169000 audit[4495]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffeba01d630 a2=0 a3=7ffeba01d61c items=0 ppid=3270 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:09.169000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:09.321482 containerd[1659]: time="2026-04-20T19:17:09.321146783Z" level=info msg="StartContainer for \"de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380\" returns successfully" Apr 20 19:17:09.471319 systemd[1]: Started sshd@26-4103-10.0.0.14:22-10.0.0.1:44990.service - OpenSSH per-connection server daemon (10.0.0.1:44990). Apr 20 19:17:09.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-4103-10.0.0.14:22-10.0.0.1:44990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:17:09.667000 audit[4510]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=4510 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:09.667000 audit[4510]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc84e7c3f0 a2=0 a3=7ffc84e7c3dc items=0 ppid=3270 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:09.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:09.718000 audit[4510]: NETFILTER_CFG table=nat:126 family=2 entries=44 op=nft_register_rule pid=4510 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:09.718000 audit[4510]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc84e7c3f0 a2=0 a3=7ffc84e7c3dc items=0 ppid=3270 pid=4510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:09.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:10.153144 kubelet[3163]: E0420 19:17:10.152972 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:17:10.159000 audit[4507]: AUDIT1101 pid=4507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:10.169000 audit[4507]: 
AUDIT1103 pid=4507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:10.169000 audit[4507]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd89e3b150 a2=3 a3=0 items=0 ppid=1 pid=4507 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:10.169000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:10.190937 sshd[4507]: Accepted publickey for core from 10.0.0.1 port 44990 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:17:10.177285 sshd-session[4507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:17:10.210911 systemd-logind[1627]: New session '28' of user 'core' with class 'user' and type 'tty'. Apr 20 19:17:10.235467 systemd[1]: Started session-28.scope - Session 28 of User core. 
Apr 20 19:17:10.344000 audit[4507]: AUDIT1105 pid=4507 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:10.376000 audit[4515]: AUDIT1103 pid=4515 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:10.839000 audit[4526]: NETFILTER_CFG table=filter:127 family=2 entries=14 op=nft_register_rule pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:10.839000 audit[4526]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffedeb27e40 a2=0 a3=7ffedeb27e2c items=0 ppid=3270 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:10.839000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:10.947638 kubelet[3163]: E0420 19:17:10.938243 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:10.953000 audit[4526]: NETFILTER_CFG table=nat:128 family=2 entries=56 op=nft_register_chain pid=4526 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:10.953000 audit[4526]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 
a1=7ffedeb27e40 a2=0 a3=7ffedeb27e2c items=0 ppid=3270 pid=4526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:10.953000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:11.000261 systemd[1]: cri-containerd-de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380.scope: Deactivated successfully. Apr 20 19:17:11.006000 audit: BPF prog-id=148 op=UNLOAD Apr 20 19:17:11.025925 containerd[1659]: time="2026-04-20T19:17:11.023909980Z" level=info msg="received container exit event container_id:\"de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380\" id:\"de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380\" pid:4482 exited_at:{seconds:1776712631 nanos:16923959}" Apr 20 19:17:11.248325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380-rootfs.mount: Deactivated successfully. 
Apr 20 19:17:11.324166 sshd[4515]: Connection closed by 10.0.0.1 port 44990 Apr 20 19:17:11.324000 audit[4507]: AUDIT1106 pid=4507 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:11.324000 audit[4507]: AUDIT1104 pid=4507 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:11.324709 sshd-session[4507]: pam_unix(sshd:session): session closed for user core Apr 20 19:17:11.359610 systemd[1]: sshd@26-4103-10.0.0.14:22-10.0.0.1:44990.service: Deactivated successfully. Apr 20 19:17:11.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-4103-10.0.0.14:22-10.0.0.1:44990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:11.450841 systemd[1]: session-28.scope: Deactivated successfully. Apr 20 19:17:11.461055 systemd-logind[1627]: Session 28 logged out. Waiting for processes to exit. Apr 20 19:17:11.477466 systemd-logind[1627]: Removed session 28. 
Apr 20 19:17:11.844751 containerd[1659]: time="2026-04-20T19:17:11.844386386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 20 19:17:12.860203 kubelet[3163]: E0420 19:17:12.859732 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:14.860038 kubelet[3163]: E0420 19:17:14.859726 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:15.185280 kubelet[3163]: E0420 19:17:15.184448 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:17:16.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-8205-10.0.0.14:22-10.0.0.1:55486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:16.444441 systemd[1]: Started sshd@27-8205-10.0.0.14:22-10.0.0.1:55486.service - OpenSSH per-connection server daemon (10.0.0.1:55486). Apr 20 19:17:16.555067 kernel: kauditd_printk_skb: 41 callbacks suppressed Apr 20 19:17:16.555564 kernel: audit: type=1130 audit(1776712636.444:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-8205-10.0.0.14:22-10.0.0.1:55486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:17:16.843000 audit[4555]: AUDIT1101 pid=4555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:16.853160 kernel: audit: type=1101 audit(1776712636.843:756): pid=4555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:16.854211 sshd[4555]: Accepted publickey for core from 10.0.0.1 port 55486 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:17:16.854000 audit[4555]: AUDIT1103 pid=4555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:16.862658 kernel: audit: type=1103 audit(1776712636.854:757): pid=4555 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:16.867135 kernel: audit: type=1006 audit(1776712636.862:758): pid=4555 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Apr 20 19:17:16.862000 audit[4555]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb0bd0410 a2=3 a3=0 items=0 ppid=1 pid=4555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:16.872759 
sshd-session[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:17:16.862000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:16.890284 kernel: audit: type=1300 audit(1776712636.862:758): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb0bd0410 a2=3 a3=0 items=0 ppid=1 pid=4555 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:16.891486 kubelet[3163]: E0420 19:17:16.872753 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:16.892264 kernel: audit: type=1327 audit(1776712636.862:758): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:16.918044 systemd-logind[1627]: New session '29' of user 'core' with class 'user' and type 'tty'. Apr 20 19:17:16.937222 systemd[1]: Started session-29.scope - Session 29 of User core. 
Apr 20 19:17:17.046000 audit[4555]: AUDIT1105 pid=4555 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:17.062489 kernel: audit: type=1105 audit(1776712637.046:759): pid=4555 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:17.066000 audit[4559]: AUDIT1103 pid=4559 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:17.088169 kernel: audit: type=1103 audit(1776712637.066:760): pid=4559 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:17.331000 audit[4565]: NETFILTER_CFG table=filter:129 family=2 entries=14 op=nft_register_rule pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:17.331000 audit[4565]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9cf59c10 a2=0 a3=7ffd9cf59bfc items=0 ppid=3270 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:17.349461 kernel: audit: type=1325 audit(1776712637.331:761): table=filter:129 family=2 entries=14 
op=nft_register_rule pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:17.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:17.351158 kernel: audit: type=1300 audit(1776712637.331:761): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd9cf59c10 a2=0 a3=7ffd9cf59bfc items=0 ppid=3270 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:17.352000 audit[4565]: NETFILTER_CFG table=nat:130 family=2 entries=20 op=nft_register_rule pid=4565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:17.352000 audit[4565]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd9cf59c10 a2=0 a3=7ffd9cf59bfc items=0 ppid=3270 pid=4565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:17.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:18.599000 audit[4572]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=4572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:18.599000 audit[4572]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb38f2e10 a2=0 a3=7ffeb38f2dfc items=0 ppid=3270 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:18.599000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:18.611000 audit[4572]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=4572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:18.611000 audit[4572]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeb38f2e10 a2=0 a3=7ffeb38f2dfc items=0 ppid=3270 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:18.611000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:18.937495 kubelet[3163]: E0420 19:17:18.886695 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:18.946970 kubelet[3163]: E0420 19:17:18.946905 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:17:19.350490 sshd[4559]: Connection closed by 10.0.0.1 port 55486 Apr 20 19:17:19.361000 audit[4555]: AUDIT1106 pid=4555 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:19.362000 audit[4555]: AUDIT1104 pid=4555 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:19.355043 sshd-session[4555]: pam_unix(sshd:session): session closed for user core Apr 20 19:17:19.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-8205-10.0.0.14:22-10.0.0.1:55486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:19.379289 systemd[1]: sshd@27-8205-10.0.0.14:22-10.0.0.1:55486.service: Deactivated successfully. Apr 20 19:17:19.397885 systemd[1]: session-29.scope: Deactivated successfully. Apr 20 19:17:19.410596 systemd[1]: session-29.scope: Consumed 1.451s CPU time, 17.8M memory peak. Apr 20 19:17:19.417173 systemd-logind[1627]: Session 29 logged out. Waiting for processes to exit. Apr 20 19:17:19.418512 systemd-logind[1627]: Removed session 29. Apr 20 19:17:20.293867 kubelet[3163]: E0420 19:17:20.291790 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:17:20.918840 kubelet[3163]: E0420 19:17:20.918392 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:22.574124 containerd[1659]: time="2026-04-20T19:17:22.571871799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:17:22.586960 containerd[1659]: time="2026-04-20T19:17:22.578920041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=1, bytes read=65011712" Apr 20 
19:17:22.587666 containerd[1659]: time="2026-04-20T19:17:22.587519620Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:17:22.706504 containerd[1659]: time="2026-04-20T19:17:22.704268685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:17:22.717173 containerd[1659]: time="2026-04-20T19:17:22.706674964Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 10.862046639s" Apr 20 19:17:22.717173 containerd[1659]: time="2026-04-20T19:17:22.706823659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 20 19:17:22.871581 kubelet[3163]: E0420 19:17:22.867740 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:22.933405 containerd[1659]: time="2026-04-20T19:17:22.933189514Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for container name:\"install-cni\"" Apr 20 19:17:23.030955 containerd[1659]: time="2026-04-20T19:17:23.028374218Z" level=info msg="Container 9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328: CDI 
devices from CRI Config.CDIDevices: []" Apr 20 19:17:23.353106 containerd[1659]: time="2026-04-20T19:17:23.352878227Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for name:\"install-cni\" returns container id \"9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328\"" Apr 20 19:17:23.359869 containerd[1659]: time="2026-04-20T19:17:23.359528359Z" level=info msg="StartContainer for \"9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328\"" Apr 20 19:17:23.493651 containerd[1659]: time="2026-04-20T19:17:23.492276943Z" level=info msg="connecting to shim 9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328" address="unix:///run/containerd/s/d6fd6f578359a16fb6047ac6b8915843558ecdd02f7ae288b74c76a061bb8a9a" protocol=ttrpc version=3 Apr 20 19:17:24.092038 systemd[1]: Started cri-containerd-9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328.scope - libcontainer container 9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328. Apr 20 19:17:24.753089 systemd[1]: Started sshd@28-8206-10.0.0.14:22-10.0.0.1:55502.service - OpenSSH per-connection server daemon (10.0.0.1:55502). Apr 20 19:17:24.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-8206-10.0.0.14:22-10.0.0.1:55502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:24.849041 kernel: kauditd_printk_skb: 13 callbacks suppressed Apr 20 19:17:24.849344 kernel: audit: type=1130 audit(1776712644.782:768): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-8206-10.0.0.14:22-10.0.0.1:55502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:17:24.880725 kubelet[3163]: E0420 19:17:24.880241 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:25.049000 audit: BPF prog-id=149 op=LOAD Apr 20 19:17:25.049000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c000106490 a2=98 a3=0 items=0 ppid=4032 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:25.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964656430383362366566613365386264333864643834323630623733 Apr 20 19:17:25.049000 audit: BPF prog-id=150 op=LOAD Apr 20 19:17:25.049000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106220 a2=98 a3=0 items=0 ppid=4032 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:25.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964656430383362366566613365386264333864643834323630623733 Apr 20 19:17:25.105472 kernel: audit: type=1334 audit(1776712645.049:769): prog-id=149 op=LOAD Apr 20 19:17:25.107279 kernel: audit: type=1300 audit(1776712645.049:769): arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c000106490 
a2=98 a3=0 items=0 ppid=4032 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:25.107510 kernel: audit: type=1327 audit(1776712645.049:769): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964656430383362366566613365386264333864643834323630623733 Apr 20 19:17:25.049000 audit: BPF prog-id=150 op=UNLOAD Apr 20 19:17:25.118893 kernel: audit: type=1334 audit(1776712645.049:770): prog-id=150 op=LOAD Apr 20 19:17:25.049000 audit[4580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:25.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964656430383362366566613365386264333864643834323630623733 Apr 20 19:17:25.049000 audit: BPF prog-id=149 op=UNLOAD Apr 20 19:17:25.049000 audit[4580]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=13 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:25.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964656430383362366566613365386264333864643834323630623733 
Apr 20 19:17:25.049000 audit: BPF prog-id=151 op=LOAD Apr 20 19:17:25.049000 audit[4580]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001066f0 a2=98 a3=0 items=0 ppid=4032 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:25.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964656430383362366566613365386264333864643834323630623733 Apr 20 19:17:25.165334 kernel: audit: type=1300 audit(1776712645.049:770): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106220 a2=98 a3=0 items=0 ppid=4032 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:25.180154 kernel: audit: type=1327 audit(1776712645.049:770): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964656430383362366566613365386264333864643834323630623733 Apr 20 19:17:25.283400 kernel: audit: type=1334 audit(1776712645.049:771): prog-id=150 op=UNLOAD Apr 20 19:17:25.289702 kernel: audit: type=1300 audit(1776712645.049:771): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:25.293919 kernel: audit: type=1327 audit(1776712645.049:771): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964656430383362366566613365386264333864643834323630623733 Apr 20 19:17:25.440758 kubelet[3163]: E0420 19:17:25.387245 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:17:26.020447 containerd[1659]: time="2026-04-20T19:17:26.015056751Z" level=info msg="StartContainer for \"9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328\" returns successfully" Apr 20 19:17:26.316000 audit[4601]: AUDIT1101 pid=4601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:26.322000 audit[4601]: AUDIT1103 pid=4601 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:26.322000 audit[4601]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff9d7938e0 a2=3 a3=0 items=0 ppid=1 pid=4601 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:26.322000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:26.354357 sshd[4601]: Accepted publickey for core from 10.0.0.1 port 55502 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:17:26.323990 sshd-session[4601]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Apr 20 19:17:26.436162 systemd-logind[1627]: New session '30' of user 'core' with class 'user' and type 'tty'. Apr 20 19:17:26.511983 systemd[1]: Started session-30.scope - Session 30 of User core. Apr 20 19:17:26.588000 audit[4601]: AUDIT1105 pid=4601 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:26.676000 audit[4618]: AUDIT1103 pid=4618 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:26.878746 kubelet[3163]: E0420 19:17:26.865521 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:28.672000 audit[4632]: NETFILTER_CFG table=filter:133 family=2 entries=13 op=nft_register_rule pid=4632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:28.672000 audit[4632]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffe53195780 a2=0 a3=7ffe5319576c items=0 ppid=3270 pid=4632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:28.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:28.697000 audit[4632]: NETFILTER_CFG 
table=nat:134 family=2 entries=27 op=nft_register_chain pid=4632 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:28.697000 audit[4632]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffe53195780 a2=0 a3=7ffe5319576c items=0 ppid=3270 pid=4632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:28.697000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:28.838261 sshd[4618]: Connection closed by 10.0.0.1 port 55502 Apr 20 19:17:28.839719 sshd-session[4601]: pam_unix(sshd:session): session closed for user core Apr 20 19:17:28.840000 audit[4601]: AUDIT1106 pid=4601 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:28.840000 audit[4601]: AUDIT1104 pid=4601 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:28.843764 systemd[1]: sshd@28-8206-10.0.0.14:22-10.0.0.1:55502.service: Deactivated successfully. Apr 20 19:17:28.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-8206-10.0.0.14:22-10.0.0.1:55502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:17:28.877357 kubelet[3163]: E0420 19:17:28.877180 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:28.893778 systemd[1]: session-30.scope: Deactivated successfully. Apr 20 19:17:28.894180 systemd[1]: session-30.scope: Consumed 1.689s CPU time, 16.2M memory peak. Apr 20 19:17:28.895713 systemd-logind[1627]: Session 30 logged out. Waiting for processes to exit. Apr 20 19:17:28.896790 systemd-logind[1627]: Removed session 30. Apr 20 19:17:30.443924 kubelet[3163]: E0420 19:17:30.440149 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:17:30.952848 kubelet[3163]: E0420 19:17:30.950269 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:30.960696 kubelet[3163]: E0420 19:17:30.953303 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:17:32.877209 kubelet[3163]: E0420 19:17:32.876900 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 
19:17:33.969194 systemd[1]: Started sshd@29-8207-10.0.0.14:22-10.0.0.1:37654.service - OpenSSH per-connection server daemon (10.0.0.1:37654). Apr 20 19:17:33.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-8207-10.0.0.14:22-10.0.0.1:37654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:34.048595 kernel: kauditd_printk_skb: 22 callbacks suppressed Apr 20 19:17:34.049773 kernel: audit: type=1130 audit(1776712653.969:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-8207-10.0.0.14:22-10.0.0.1:37654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:35.008221 kubelet[3163]: E0420 19:17:35.007903 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:35.165000 audit[4637]: AUDIT1101 pid=4637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:35.166631 sshd[4637]: Accepted publickey for core from 10.0.0.1 port 37654 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:17:35.173205 kernel: audit: type=1101 audit(1776712655.165:785): pid=4637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:35.188000 
audit[4637]: AUDIT1103 pid=4637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:35.192218 sshd-session[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:17:35.190000 audit[4637]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefe94a550 a2=3 a3=0 items=0 ppid=1 pid=4637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:35.202891 kernel: audit: type=1103 audit(1776712655.188:786): pid=4637 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:35.190000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:35.204672 kernel: audit: type=1006 audit(1776712655.190:787): pid=4637 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Apr 20 19:17:35.204771 kernel: audit: type=1300 audit(1776712655.190:787): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffefe94a550 a2=3 a3=0 items=0 ppid=1 pid=4637 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:35.204786 kernel: audit: type=1327 audit(1776712655.190:787): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:35.235185 systemd-logind[1627]: New session '31' of user 'core' with class 'user' and type 'tty'. 
Apr 20 19:17:35.246870 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 20 19:17:35.350000 audit[4637]: AUDIT1105 pid=4637 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:35.366735 kernel: audit: type=1105 audit(1776712655.350:788): pid=4637 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:35.377000 audit[4641]: AUDIT1103 pid=4641 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:35.385669 kernel: audit: type=1103 audit(1776712655.377:789): pid=4641 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:35.471143 kubelet[3163]: E0420 19:17:35.470022 3163 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 20 19:17:35.945000 audit[4652]: NETFILTER_CFG table=filter:135 family=2 entries=12 op=nft_register_rule pid=4652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:35.953085 kernel: audit: type=1325 audit(1776712655.945:790): table=filter:135 family=2 entries=12 op=nft_register_rule pid=4652 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:35.945000 audit[4652]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd8f146310 a2=0 a3=7ffd8f1462fc items=0 ppid=3270 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:35.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:35.973000 audit[4652]: NETFILTER_CFG table=nat:136 family=2 entries=30 op=nft_register_rule pid=4652 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:35.973000 audit[4652]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffd8f146310 a2=0 a3=7ffd8f1462fc items=0 ppid=3270 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:35.973000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:35.983759 kernel: audit: type=1300 audit(1776712655.945:790): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffd8f146310 a2=0 a3=7ffd8f1462fc items=0 ppid=3270 pid=4652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:36.022000 audit[4654]: NETFILTER_CFG table=filter:137 family=2 entries=12 op=nft_register_rule pid=4654 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:36.022000 audit[4654]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcaa1c2390 a2=0 a3=7ffcaa1c237c items=0 ppid=3270 pid=4654 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:36.022000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:36.039000 audit[4654]: NETFILTER_CFG table=nat:138 family=2 entries=22 op=nft_register_rule pid=4654 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:36.039000 audit[4654]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcaa1c2390 a2=0 a3=7ffcaa1c237c items=0 ppid=3270 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:36.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:36.210991 sshd[4641]: Connection closed by 10.0.0.1 port 37654 Apr 20 19:17:36.214851 sshd-session[4637]: pam_unix(sshd:session): session closed for user core Apr 20 19:17:36.215000 audit[4637]: AUDIT1106 pid=4637 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:36.221000 audit[4637]: AUDIT1104 pid=4637 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:36.234874 systemd[1]: sshd@29-8207-10.0.0.14:22-10.0.0.1:37654.service: Deactivated successfully. 
Apr 20 19:17:36.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-8207-10.0.0.14:22-10.0.0.1:37654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:36.261629 systemd[1]: session-31.scope: Deactivated successfully. Apr 20 19:17:36.277846 systemd-logind[1627]: Session 31 logged out. Waiting for processes to exit. Apr 20 19:17:36.291796 systemd-logind[1627]: Removed session 31. Apr 20 19:17:36.860172 kubelet[3163]: E0420 19:17:36.859997 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:37.066739 systemd[1]: cri-containerd-9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328.scope: Deactivated successfully. Apr 20 19:17:37.070223 systemd[1]: cri-containerd-9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328.scope: Consumed 7.067s CPU time, 180M memory peak, 4.7M read from disk, 177M written to disk. Apr 20 19:17:37.074000 audit: BPF prog-id=151 op=UNLOAD Apr 20 19:17:37.079242 containerd[1659]: time="2026-04-20T19:17:37.077374863Z" level=info msg="received container exit event container_id:\"9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328\" id:\"9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328\" pid:4594 exited_at:{seconds:1776712657 nanos:67503909}" Apr 20 19:17:37.296735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328-rootfs.mount: Deactivated successfully. 
Apr 20 19:17:38.414428 containerd[1659]: time="2026-04-20T19:17:38.413082795Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for container name:\"calico-node\"" Apr 20 19:17:38.486056 containerd[1659]: time="2026-04-20T19:17:38.484899332Z" level=info msg="Container 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:17:38.584108 containerd[1659]: time="2026-04-20T19:17:38.581843225Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for name:\"calico-node\" returns container id \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\"" Apr 20 19:17:38.600727 containerd[1659]: time="2026-04-20T19:17:38.600349633Z" level=info msg="StartContainer for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\"" Apr 20 19:17:38.642146 containerd[1659]: time="2026-04-20T19:17:38.641035323Z" level=info msg="connecting to shim 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" address="unix:///run/containerd/s/d6fd6f578359a16fb6047ac6b8915843558ecdd02f7ae288b74c76a061bb8a9a" protocol=ttrpc version=3 Apr 20 19:17:38.863142 kubelet[3163]: E0420 19:17:38.862113 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:38.954989 systemd[1]: Started cri-containerd-7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9.scope - libcontainer container 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9. 
Apr 20 19:17:39.569000 audit: BPF prog-id=152 op=LOAD Apr 20 19:17:39.569000 audit[4673]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c000186490 a2=98 a3=0 items=0 ppid=4032 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:39.569000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643533663366616664616165306132333262336632366166623230 Apr 20 19:17:39.575000 audit: BPF prog-id=153 op=LOAD Apr 20 19:17:39.575000 audit[4673]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186220 a2=98 a3=0 items=0 ppid=4032 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:39.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643533663366616664616165306132333262336632366166623230 Apr 20 19:17:39.613090 kernel: kauditd_printk_skb: 14 callbacks suppressed Apr 20 19:17:39.575000 audit: BPF prog-id=153 op=UNLOAD Apr 20 19:17:39.617452 kernel: audit: type=1334 audit(1776712659.569:798): prog-id=152 op=LOAD Apr 20 19:17:39.575000 audit[4673]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:39.624262 kernel: audit: type=1300 audit(1776712659.569:798): arch=c000003e 
syscall=321 success=yes exit=19 a0=5 a1=c000186490 a2=98 a3=0 items=0 ppid=4032 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:39.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643533663366616664616165306132333262336632366166623230 Apr 20 19:17:39.629235 kernel: audit: type=1327 audit(1776712659.569:798): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643533663366616664616165306132333262336632366166623230 Apr 20 19:17:39.632823 kernel: audit: type=1334 audit(1776712659.575:799): prog-id=153 op=LOAD Apr 20 19:17:39.635164 kernel: audit: type=1300 audit(1776712659.575:799): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186220 a2=98 a3=0 items=0 ppid=4032 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:39.600000 audit: BPF prog-id=152 op=UNLOAD Apr 20 19:17:39.635801 kernel: audit: type=1327 audit(1776712659.575:799): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643533663366616664616165306132333262336632366166623230 Apr 20 19:17:39.635988 kernel: audit: type=1334 audit(1776712659.575:800): prog-id=153 op=UNLOAD Apr 20 19:17:39.637143 kernel: audit: type=1300 audit(1776712659.575:800): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=4032 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:39.600000 audit[4673]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=13 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:39.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643533663366616664616165306132333262336632366166623230 Apr 20 19:17:39.600000 audit: BPF prog-id=154 op=LOAD Apr 20 19:17:39.600000 audit[4673]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001866f0 a2=98 a3=0 items=0 ppid=4032 pid=4673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:39.600000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643533663366616664616165306132333262336632366166623230 Apr 20 19:17:39.642527 kernel: audit: type=1327 audit(1776712659.575:800): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3761643533663366616664616165306132333262336632366166623230 Apr 20 19:17:39.642841 kernel: audit: type=1334 audit(1776712659.600:801): prog-id=152 op=UNLOAD Apr 20 19:17:40.098699 containerd[1659]: 
time="2026-04-20T19:17:40.097913487Z" level=info msg="StartContainer for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" returns successfully" Apr 20 19:17:41.899162 systemd[1]: Created slice kubepods-besteffort-pod9f02930c_961c_4c4b_8334_b61cbd5c3d20.slice - libcontainer container kubepods-besteffort-pod9f02930c_961c_4c4b_8334_b61cbd5c3d20.slice. Apr 20 19:17:42.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-8208-10.0.0.14:22-10.0.0.1:45654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:42.094879 systemd[1]: Started sshd@30-8208-10.0.0.14:22-10.0.0.1:45654.service - OpenSSH per-connection server daemon (10.0.0.1:45654). Apr 20 19:17:42.334019 containerd[1659]: time="2026-04-20T19:17:42.330308171Z" level=info msg="RunPodSandbox for name:\"csi-node-driver-5h6vg\" uid:\"9f02930c-961c-4c4b-8334-b61cbd5c3d20\" namespace:\"calico-system\"" Apr 20 19:17:43.422000 audit[4710]: AUDIT1101 pid=4710 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:43.428357 sshd[4710]: Accepted publickey for core from 10.0.0.1 port 45654 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:17:43.440000 audit[4710]: AUDIT1103 pid=4710 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:43.442000 audit[4710]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff62437bd0 a2=3 a3=0 items=0 ppid=1 pid=4710 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:43.442000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:43.444362 sshd-session[4710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:17:43.678844 systemd-logind[1627]: New session '32' of user 'core' with class 'user' and type 'tty'. Apr 20 19:17:43.703359 systemd[1]: Started session-32.scope - Session 32 of User core. Apr 20 19:17:43.769000 audit[4710]: AUDIT1105 pid=4710 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:43.875000 audit[4727]: AUDIT1103 pid=4727 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:43.900442 kubelet[3163]: E0420 19:17:43.900357 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:17:45.925000 audit[4760]: NETFILTER_CFG table=filter:139 family=2 entries=11 op=nft_register_rule pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:45.925000 audit[4760]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffeefafc000 a2=0 a3=7ffeefafbfec items=0 ppid=3270 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:45.961125 kernel: kauditd_printk_skb: 13 
callbacks suppressed Apr 20 19:17:45.961247 kernel: audit: type=1325 audit(1776712665.925:809): table=filter:139 family=2 entries=11 op=nft_register_rule pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:45.961271 kernel: audit: type=1300 audit(1776712665.925:809): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffeefafc000 a2=0 a3=7ffeefafbfec items=0 ppid=3270 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:45.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:45.971644 kernel: audit: type=1327 audit(1776712665.925:809): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:45.948000 audit[4760]: NETFILTER_CFG table=nat:140 family=2 entries=41 op=nft_register_chain pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:45.986581 kernel: audit: type=1325 audit(1776712665.948:810): table=nat:140 family=2 entries=41 op=nft_register_chain pid=4760 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:45.948000 audit[4760]: SYSCALL arch=c000003e syscall=46 success=yes exit=14812 a0=3 a1=7ffeefafc000 a2=0 a3=7ffeefafbfec items=0 ppid=3270 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:46.006417 kernel: audit: type=1300 audit(1776712665.948:810): arch=c000003e syscall=46 success=yes exit=14812 a0=3 a1=7ffeefafc000 a2=0 a3=7ffeefafbfec items=0 ppid=3270 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:45.948000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:46.030056 kernel: audit: type=1327 audit(1776712665.948:810): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:46.032222 sshd[4727]: Connection closed by 10.0.0.1 port 45654 Apr 20 19:17:46.053066 sshd-session[4710]: pam_unix(sshd:session): session closed for user core Apr 20 19:17:46.083000 audit[4710]: AUDIT1106 pid=4710 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:46.123913 kernel: audit: type=1106 audit(1776712666.083:811): pid=4710 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:46.085000 audit[4710]: AUDIT1104 pid=4710 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:46.133000 audit[4762]: NETFILTER_CFG table=filter:141 family=2 entries=10 op=nft_register_rule pid=4762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:46.133000 audit[4762]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffff8e9e730 a2=0 a3=7ffff8e9e71c items=0 ppid=3270 pid=4762 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:46.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:46.162000 audit[4762]: NETFILTER_CFG table=nat:142 family=2 entries=24 op=nft_register_rule pid=4762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:46.163726 kernel: audit: type=1104 audit(1776712666.085:812): pid=4710 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:46.162330 systemd[1]: sshd@30-8208-10.0.0.14:22-10.0.0.1:45654.service: Deactivated successfully. Apr 20 19:17:46.163964 kernel: audit: type=1325 audit(1776712666.133:813): table=filter:141 family=2 entries=10 op=nft_register_rule pid=4762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:17:46.163982 kernel: audit: type=1300 audit(1776712666.133:813): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffff8e9e730 a2=0 a3=7ffff8e9e71c items=0 ppid=3270 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:46.162000 audit[4762]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffff8e9e730 a2=0 a3=7ffff8e9e71c items=0 ppid=3270 pid=4762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:46.162000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:17:46.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-8208-10.0.0.14:22-10.0.0.1:45654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:46.195093 systemd[1]: session-32.scope: Deactivated successfully. Apr 20 19:17:46.198922 systemd[1]: session-32.scope: Consumed 1.584s CPU time, 16.1M memory peak. Apr 20 19:17:46.208103 systemd-logind[1627]: Session 32 logged out. Waiting for processes to exit. Apr 20 19:17:46.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-12293-10.0.0.14:22-10.0.0.1:48054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:46.231963 systemd[1]: Started sshd@31-12293-10.0.0.14:22-10.0.0.1:48054.service - OpenSSH per-connection server daemon (10.0.0.1:48054). Apr 20 19:17:46.250586 containerd[1659]: time="2026-04-20T19:17:46.249617364Z" level=error msg="Failed to destroy network for sandbox \"bbd71cc0e3ae810aa2b742e5b3212fc11837c79920bb05ac01057b15c3a4a237\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 20 19:17:46.255206 systemd-logind[1627]: Removed session 32. 
Apr 20 19:17:46.393950 containerd[1659]: time="2026-04-20T19:17:46.393577523Z" level=error msg="RunPodSandbox for name:\"csi-node-driver-5h6vg\" uid:\"9f02930c-961c-4c4b-8334-b61cbd5c3d20\" namespace:\"calico-system\" failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbd71cc0e3ae810aa2b742e5b3212fc11837c79920bb05ac01057b15c3a4a237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 20 19:17:46.404171 kubelet[3163]: E0420 19:17:46.398478 3163 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbd71cc0e3ae810aa2b742e5b3212fc11837c79920bb05ac01057b15c3a4a237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 20 19:17:46.404171 kubelet[3163]: E0420 19:17:46.398695 3163 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbd71cc0e3ae810aa2b742e5b3212fc11837c79920bb05ac01057b15c3a4a237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5h6vg" Apr 20 19:17:46.404171 kubelet[3163]: E0420 19:17:46.398717 3163 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbd71cc0e3ae810aa2b742e5b3212fc11837c79920bb05ac01057b15c3a4a237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5h6vg" Apr 20 
19:17:46.404041 systemd[1]: run-netns-cni\x2d73490c95\x2d5740\x2d1047\x2d7a79\x2da7f3826059a2.mount: Deactivated successfully. Apr 20 19:17:46.422879 kubelet[3163]: E0420 19:17:46.398913 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5h6vg_calico-system(9f02930c-961c-4c4b-8334-b61cbd5c3d20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5h6vg_calico-system(9f02930c-961c-4c4b-8334-b61cbd5c3d20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbd71cc0e3ae810aa2b742e5b3212fc11837c79920bb05ac01057b15c3a4a237\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5h6vg" podUID="9f02930c-961c-4c4b-8334-b61cbd5c3d20" Apr 20 19:17:47.383000 audit[4768]: AUDIT1101 pid=4768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:47.406911 sshd[4768]: Accepted publickey for core from 10.0.0.1 port 48054 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:17:47.537000 audit[4768]: AUDIT1103 pid=4768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:47.539000 audit[4768]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfd7a9690 a2=3 a3=0 items=0 ppid=1 pid=4768 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 
key=(null) Apr 20 19:17:47.539000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:47.554221 sshd-session[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:17:47.884373 systemd-logind[1627]: New session '33' of user 'core' with class 'user' and type 'tty'. Apr 20 19:17:47.942925 systemd[1]: Started session-33.scope - Session 33 of User core. Apr 20 19:17:48.022000 audit[4768]: AUDIT1105 pid=4768 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:48.090000 audit[4772]: AUDIT1103 pid=4772 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:50.220862 sshd[4772]: Connection closed by 10.0.0.1 port 48054 Apr 20 19:17:50.341016 sshd-session[4768]: pam_unix(sshd:session): session closed for user core Apr 20 19:17:50.386000 audit[4768]: AUDIT1106 pid=4768 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:50.409000 audit[4768]: AUDIT1104 pid=4768 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:50.496048 systemd[1]: sshd@31-12293-10.0.0.14:22-10.0.0.1:48054.service: Deactivated 
successfully. Apr 20 19:17:50.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-12293-10.0.0.14:22-10.0.0.1:48054 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:50.553675 systemd[1]: session-33.scope: Deactivated successfully. Apr 20 19:17:50.612156 systemd[1]: session-33.scope: Consumed 1.687s CPU time, 24.2M memory peak. Apr 20 19:17:50.672012 systemd-logind[1627]: Session 33 logged out. Waiting for processes to exit. Apr 20 19:17:50.795887 systemd[1]: Started sshd@32-8209-10.0.0.14:22-10.0.0.1:48066.service - OpenSSH per-connection server daemon (10.0.0.1:48066). Apr 20 19:17:50.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-8209-10.0.0.14:22-10.0.0.1:48066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:50.835029 systemd-logind[1627]: Removed session 33. 
Apr 20 19:17:51.228000 audit[4784]: AUDIT1101 pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:51.234087 kernel: kauditd_printk_skb: 17 callbacks suppressed Apr 20 19:17:51.234835 sshd[4784]: Accepted publickey for core from 10.0.0.1 port 48066 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:17:51.236149 kernel: audit: type=1101 audit(1776712671.228:826): pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:51.236387 sshd-session[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:17:51.234000 audit[4784]: AUDIT1103 pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:51.247262 kernel: audit: type=1103 audit(1776712671.234:827): pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:51.247858 kernel: audit: type=1006 audit(1776712671.235:828): pid=4784 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Apr 20 19:17:51.235000 audit[4784]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8270c260 a2=3 a3=0 items=0 ppid=1 pid=4784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:51.262221 kernel: audit: type=1300 audit(1776712671.235:828): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8270c260 a2=3 a3=0 items=0 ppid=1 pid=4784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:17:51.235000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:51.267980 kernel: audit: type=1327 audit(1776712671.235:828): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:17:51.286351 systemd-logind[1627]: New session '34' of user 'core' with class 'user' and type 'tty'. Apr 20 19:17:51.303104 systemd[1]: Started session-34.scope - Session 34 of User core. Apr 20 19:17:51.306000 audit[4784]: AUDIT1105 pid=4784 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:51.316659 kernel: audit: type=1105 audit(1776712671.306:829): pid=4784 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:51.318000 audit[4796]: AUDIT1103 pid=4796 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:51.327058 
kernel: audit: type=1103 audit(1776712671.318:830): pid=4796 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:51.939239 kubelet[3163]: E0420 19:17:51.937326 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:17:52.713638 sshd[4796]: Connection closed by 10.0.0.1 port 48066 Apr 20 19:17:52.713000 audit[4784]: AUDIT1106 pid=4784 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:52.713206 sshd-session[4784]: pam_unix(sshd:session): session closed for user core Apr 20 19:17:52.714000 audit[4784]: AUDIT1104 pid=4784 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:52.738608 kernel: audit: type=1106 audit(1776712672.713:831): pid=4784 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:52.738706 kernel: audit: type=1104 audit(1776712672.714:832): pid=4784 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:17:52.763213 systemd[1]: sshd@32-8209-10.0.0.14:22-10.0.0.1:48066.service: Deactivated successfully. Apr 20 19:17:52.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-8209-10.0.0.14:22-10.0.0.1:48066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:52.823972 kernel: audit: type=1131 audit(1776712672.778:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-8209-10.0.0.14:22-10.0.0.1:48066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:52.850842 systemd[1]: session-34.scope: Deactivated successfully. Apr 20 19:17:52.851775 systemd[1]: session-34.scope: Consumed 1.161s CPU time, 16M memory peak. Apr 20 19:17:52.882682 systemd-logind[1627]: Session 34 logged out. Waiting for processes to exit. Apr 20 19:17:52.903690 systemd-logind[1627]: Removed session 34. Apr 20 19:17:57.867634 containerd[1659]: time="2026-04-20T19:17:57.864756954Z" level=info msg="RunPodSandbox for name:\"csi-node-driver-5h6vg\" uid:\"9f02930c-961c-4c4b-8334-b61cbd5c3d20\" namespace:\"calico-system\"" Apr 20 19:17:58.260770 systemd[1]: Started sshd@33-8210-10.0.0.14:22-10.0.0.1:58654.service - OpenSSH per-connection server daemon (10.0.0.1:58654). Apr 20 19:17:58.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-8210-10.0.0.14:22-10.0.0.1:58654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:17:58.361896 kernel: audit: type=1130 audit(1776712678.262:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-8210-10.0.0.14:22-10.0.0.1:58654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:18:00.317000 audit[4933]: AUDIT1101 pid=4933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:00.355657 kernel: audit: type=1101 audit(1776712680.317:835): pid=4933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:00.364346 sshd[4933]: Accepted publickey for core from 10.0.0.1 port 58654 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:18:00.381000 audit[4933]: AUDIT1103 pid=4933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:00.412000 audit[4933]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde84f6b40 a2=3 a3=0 items=0 ppid=1 pid=4933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:00.424294 kernel: audit: type=1103 audit(1776712680.381:836): pid=4933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:00.438714 kernel: audit: type=1006 audit(1776712680.412:837): pid=4933 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Apr 20 19:18:00.425683 
sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:18:00.412000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:18:00.465768 kernel: audit: type=1300 audit(1776712680.412:837): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde84f6b40 a2=3 a3=0 items=0 ppid=1 pid=4933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:00.465890 kernel: audit: type=1327 audit(1776712680.412:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:18:00.811737 systemd-logind[1627]: New session '35' of user 'core' with class 'user' and type 'tty'. Apr 20 19:18:01.043216 systemd[1]: Started session-35.scope - Session 35 of User core. Apr 20 19:18:01.412000 audit[4933]: AUDIT1105 pid=4933 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:01.439755 kernel: audit: type=1105 audit(1776712681.412:838): pid=4933 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:01.533000 audit[4949]: AUDIT1103 pid=4949 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:01.568998 kernel: audit: type=1103 
audit(1776712681.533:839): pid=4949 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:05.199613 sshd[4949]: Connection closed by 10.0.0.1 port 58654 Apr 20 19:18:05.207000 audit[4933]: AUDIT1106 pid=4933 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:05.207612 sshd-session[4933]: pam_unix(sshd:session): session closed for user core Apr 20 19:18:05.217753 kernel: audit: type=1106 audit(1776712685.207:840): pid=4933 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:05.219000 audit[4933]: AUDIT1104 pid=4933 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:05.232658 kernel: audit: type=1104 audit(1776712685.219:841): pid=4933 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:05.232601 systemd[1]: sshd@33-8210-10.0.0.14:22-10.0.0.1:58654.service: Deactivated successfully. 
Apr 20 19:18:05.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-8210-10.0.0.14:22-10.0.0.1:58654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:05.242353 kernel: audit: type=1131 audit(1776712685.232:842): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-8210-10.0.0.14:22-10.0.0.1:58654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:05.388451 systemd[1]: session-35.scope: Deactivated successfully. Apr 20 19:18:05.389255 systemd[1]: session-35.scope: Consumed 2.018s CPU time, 17M memory peak. Apr 20 19:18:05.462048 systemd-logind[1627]: Session 35 logged out. Waiting for processes to exit. Apr 20 19:18:05.575714 systemd-logind[1627]: Removed session 35. Apr 20 19:18:09.607000 audit[4996]: NETFILTER_CFG table=filter:143 family=2 entries=9 op=nft_register_rule pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:09.607000 audit[4996]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe9b7687b0 a2=0 a3=7ffe9b76879c items=0 ppid=3270 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:09.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:09.734016 kernel: audit: type=1325 audit(1776712689.607:843): table=filter:143 family=2 entries=9 op=nft_register_rule pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:09.738634 kernel: audit: type=1300 audit(1776712689.607:843): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe9b7687b0 a2=0 a3=7ffe9b76879c items=0 ppid=3270 pid=4996 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:09.740157 kernel: audit: type=1327 audit(1776712689.607:843): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:09.937163 systemd-networkd[1441]: cali05ffc2f4b44: Link UP Apr 20 19:18:09.936000 audit[4996]: NETFILTER_CFG table=nat:144 family=2 entries=31 op=nft_register_chain pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:09.936000 audit[4996]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffe9b7687b0 a2=0 a3=7ffe9b76879c items=0 ppid=3270 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:09.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:09.986668 kernel: audit: type=1325 audit(1776712689.936:844): table=nat:144 family=2 entries=31 op=nft_register_chain pid=4996 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:09.967788 systemd-networkd[1441]: cali05ffc2f4b44: Gained carrier Apr 20 19:18:09.987077 kernel: audit: type=1300 audit(1776712689.936:844): arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffe9b7687b0 a2=0 a3=7ffe9b76879c items=0 ppid=3270 pid=4996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:09.987120 kernel: audit: type=1327 audit(1776712689.936:844): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 
19:18:10.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-12294-10.0.0.14:22-10.0.0.1:53872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:10.593353 systemd[1]: Started sshd@34-12294-10.0.0.14:22-10.0.0.1:53872.service - OpenSSH per-connection server daemon (10.0.0.1:53872). Apr 20 19:18:10.616615 kernel: audit: type=1130 audit(1776712690.593:845): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-12294-10.0.0.14:22-10.0.0.1:53872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:11.142714 kubelet[3163]: I0420 19:18:11.073061 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g9fs5" podStartSLOduration=57.072224395 podStartE2EDuration="2m29.06539312s" podCreationTimestamp="2026-04-20 19:15:42 +0000 UTC" firstStartedPulling="2026-04-20 19:15:50.747946776 +0000 UTC m=+448.842196473" lastFinishedPulling="2026-04-20 19:17:22.74111549 +0000 UTC m=+540.835365198" observedRunningTime="2026-04-20 19:17:40.673943079 +0000 UTC m=+558.768192805" watchObservedRunningTime="2026-04-20 19:18:11.06539312 +0000 UTC m=+589.159642840" Apr 20 19:18:11.163168 containerd[1659]: 2026-04-20 19:18:03.838 [ERROR][4928] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 20 19:18:11.163168 containerd[1659]: 2026-04-20 19:18:05.163 [INFO][4928] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5h6vg-eth0 csi-node-driver- calico-system 9f02930c-961c-4c4b-8334-b61cbd5c3d20 1576 0 2026-04-20 19:15:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5h6vg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali05ffc2f4b44 [] [] }} ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Namespace="calico-system" Pod="csi-node-driver-5h6vg" WorkloadEndpoint="localhost-k8s-csi--node--driver--5h6vg-" Apr 20 19:18:11.163168 containerd[1659]: 2026-04-20 19:18:05.175 [INFO][4928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Namespace="calico-system" Pod="csi-node-driver-5h6vg" WorkloadEndpoint="localhost-k8s-csi--node--driver--5h6vg-eth0" Apr 20 19:18:11.163168 containerd[1659]: 2026-04-20 19:18:07.164 [INFO][4975] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" HandleID="k8s-pod-network.a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Workload="localhost-k8s-csi--node--driver--5h6vg-eth0" Apr 20 19:18:11.240600 containerd[1659]: 2026-04-20 19:18:08.068 [INFO][4975] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" HandleID="k8s-pod-network.a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Workload="localhost-k8s-csi--node--driver--5h6vg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011ed50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5h6vg", "timestamp":"2026-04-20 19:18:07.154323395 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005fe840)} Apr 20 19:18:11.240600 containerd[1659]: 2026-04-20 19:18:08.161 [INFO][4975] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 20 19:18:11.240600 containerd[1659]: 2026-04-20 19:18:08.249 [INFO][4975] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 20 19:18:11.240600 containerd[1659]: 2026-04-20 19:18:08.282 [INFO][4975] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 20 19:18:11.240600 containerd[1659]: 2026-04-20 19:18:08.403 [INFO][4975] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" host="localhost" Apr 20 19:18:11.240600 containerd[1659]: 2026-04-20 19:18:08.561 [INFO][4975] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 20 19:18:11.240600 containerd[1659]: 2026-04-20 19:18:08.648 [INFO][4975] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 20 19:18:11.240600 containerd[1659]: 2026-04-20 19:18:08.759 [INFO][4975] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 20 19:18:11.240600 containerd[1659]: 2026-04-20 19:18:08.795 [INFO][4975] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 20 19:18:11.240600 containerd[1659]: 2026-04-20 19:18:08.795 [INFO][4975] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" host="localhost" Apr 20 19:18:11.244588 containerd[1659]: 2026-04-20 19:18:08.838 [INFO][4975] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850 Apr 20 19:18:11.244588 containerd[1659]: 2026-04-20 19:18:08.979 [INFO][4975] ipam/ipam.go 
1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" host="localhost" Apr 20 19:18:11.244588 containerd[1659]: 2026-04-20 19:18:09.271 [INFO][4975] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" host="localhost" Apr 20 19:18:11.244588 containerd[1659]: 2026-04-20 19:18:09.272 [INFO][4975] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" host="localhost" Apr 20 19:18:11.244588 containerd[1659]: 2026-04-20 19:18:09.272 [INFO][4975] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 20 19:18:11.244588 containerd[1659]: 2026-04-20 19:18:09.272 [INFO][4975] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" HandleID="k8s-pod-network.a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Workload="localhost-k8s-csi--node--driver--5h6vg-eth0" Apr 20 19:18:11.244722 containerd[1659]: 2026-04-20 19:18:09.306 [INFO][4928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Namespace="calico-system" Pod="csi-node-driver-5h6vg" WorkloadEndpoint="localhost-k8s-csi--node--driver--5h6vg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5h6vg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f02930c-961c-4c4b-8334-b61cbd5c3d20", ResourceVersion:"1576", Generation:0, CreationTimestamp:time.Date(2026, time.April, 20, 19, 15, 44, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5h6vg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05ffc2f4b44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 20 19:18:11.244875 containerd[1659]: 2026-04-20 19:18:09.308 [INFO][4928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Namespace="calico-system" Pod="csi-node-driver-5h6vg" WorkloadEndpoint="localhost-k8s-csi--node--driver--5h6vg-eth0" Apr 20 19:18:11.244875 containerd[1659]: 2026-04-20 19:18:09.308 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05ffc2f4b44 ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Namespace="calico-system" Pod="csi-node-driver-5h6vg" WorkloadEndpoint="localhost-k8s-csi--node--driver--5h6vg-eth0" Apr 20 19:18:11.244875 containerd[1659]: 2026-04-20 19:18:09.986 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Namespace="calico-system" 
Pod="csi-node-driver-5h6vg" WorkloadEndpoint="localhost-k8s-csi--node--driver--5h6vg-eth0" Apr 20 19:18:11.244927 containerd[1659]: 2026-04-20 19:18:10.044 [INFO][4928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Namespace="calico-system" Pod="csi-node-driver-5h6vg" WorkloadEndpoint="localhost-k8s-csi--node--driver--5h6vg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5h6vg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f02930c-961c-4c4b-8334-b61cbd5c3d20", ResourceVersion:"1576", Generation:0, CreationTimestamp:time.Date(2026, time.April, 20, 19, 15, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850", Pod:"csi-node-driver-5h6vg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05ffc2f4b44", MAC:"12:40:2a:38:11:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 20 19:18:11.245014 containerd[1659]: 2026-04-20 19:18:11.011 [INFO][4928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" Namespace="calico-system" Pod="csi-node-driver-5h6vg" WorkloadEndpoint="localhost-k8s-csi--node--driver--5h6vg-eth0" Apr 20 19:18:12.033629 systemd-networkd[1441]: cali05ffc2f4b44: Gained IPv6LL Apr 20 19:18:13.286940 containerd[1659]: time="2026-04-20T19:18:13.265609981Z" level=info msg="connecting to shim a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850" address="unix:///run/containerd/s/0eb0305fea608ca907cd45f8e4caa2dc3006719b8287158ec4c075e9e2b37cfa" namespace=k8s.io protocol=ttrpc version=3 Apr 20 19:18:13.491000 audit[5001]: AUDIT1101 pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:13.519415 kernel: audit: type=1101 audit(1776712693.491:846): pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:13.519961 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 53872 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:18:13.525000 audit[5001]: AUDIT1103 pid=5001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:13.552282 kernel: audit: type=1103 audit(1776712693.525:847): pid=5001 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:13.552000 audit[5001]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4f138fa0 a2=3 a3=0 items=0 ppid=1 pid=5001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:13.552000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:18:13.615821 kernel: audit: type=1006 audit(1776712693.552:848): pid=5001 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Apr 20 19:18:13.616019 kernel: audit: type=1300 audit(1776712693.552:848): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff4f138fa0 a2=3 a3=0 items=0 ppid=1 pid=5001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:13.616054 kernel: audit: type=1327 audit(1776712693.552:848): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:18:13.616243 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:18:13.843914 systemd-logind[1627]: New session '36' of user 'core' with class 'user' and type 'tty'. Apr 20 19:18:13.875329 systemd[1]: Started session-36.scope - Session 36 of User core. 
Apr 20 19:18:13.897000 audit[5001]: AUDIT1105 pid=5001 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:13.911192 kernel: audit: type=1105 audit(1776712693.897:849): pid=5001 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:13.913000 audit[5061]: AUDIT1103 pid=5061 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:13.922828 kernel: audit: type=1103 audit(1776712693.913:850): pid=5061 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:14.166833 systemd[1]: Started cri-containerd-a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850.scope - libcontainer container a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850. 
Apr 20 19:18:15.062000 audit: BPF prog-id=155 op=LOAD Apr 20 19:18:15.123923 kernel: audit: type=1334 audit(1776712695.062:851): prog-id=155 op=LOAD Apr 20 19:18:15.137000 audit: BPF prog-id=156 op=LOAD Apr 20 19:18:15.137000 audit[5050]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130240 a2=98 a3=0 items=0 ppid=5035 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:15.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139356131353266623133366231666263383564636136613838616235 Apr 20 19:18:15.137000 audit: BPF prog-id=156 op=UNLOAD Apr 20 19:18:15.137000 audit[5050]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5035 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:15.137000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139356131353266623133366231666263383564636136613838616235 Apr 20 19:18:15.241926 kernel: audit: type=1334 audit(1776712695.137:852): prog-id=156 op=LOAD Apr 20 19:18:15.236000 audit: BPF prog-id=157 op=LOAD Apr 20 19:18:15.236000 audit[5050]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130490 a2=98 a3=0 items=0 ppid=5035 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 
19:18:15.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139356131353266623133366231666263383564636136613838616235 Apr 20 19:18:15.236000 audit: BPF prog-id=158 op=LOAD Apr 20 19:18:15.236000 audit[5050]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130220 a2=98 a3=0 items=0 ppid=5035 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:15.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139356131353266623133366231666263383564636136613838616235 Apr 20 19:18:15.236000 audit: BPF prog-id=158 op=UNLOAD Apr 20 19:18:15.236000 audit[5050]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5035 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:15.236000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139356131353266623133366231666263383564636136613838616235 Apr 20 19:18:15.239000 audit: BPF prog-id=157 op=UNLOAD Apr 20 19:18:15.239000 audit[5050]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5035 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:15.239000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139356131353266623133366231666263383564636136613838616235 Apr 20 19:18:15.240000 audit: BPF prog-id=159 op=LOAD Apr 20 19:18:15.240000 audit[5050]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306f0 a2=98 a3=0 items=0 ppid=5035 pid=5050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:15.240000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139356131353266623133366231666263383564636136613838616235 Apr 20 19:18:15.473856 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 20 19:18:16.929398 containerd[1659]: time="2026-04-20T19:18:16.920523142Z" level=info msg="RunPodSandbox for name:\"csi-node-driver-5h6vg\" uid:\"9f02930c-961c-4c4b-8334-b61cbd5c3d20\" namespace:\"calico-system\" returns sandbox id \"a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850\"" Apr 20 19:18:16.971000 audit[5001]: AUDIT1106 pid=5001 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:16.972000 audit[5001]: AUDIT1104 pid=5001 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:17.109141 sshd[5061]: Connection closed by 10.0.0.1 port 53872 Apr 20 19:18:17.112396 kernel: kauditd_printk_skb: 20 callbacks suppressed Apr 20 19:18:16.954916 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Apr 20 19:18:17.113506 kernel: audit: type=1106 audit(1776712696.971:859): pid=5001 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:17.113763 kernel: audit: type=1104 audit(1776712696.972:860): pid=5001 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:17.125185 systemd[1]: sshd@34-12294-10.0.0.14:22-10.0.0.1:53872.service: Deactivated successfully. Apr 20 19:18:17.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-12294-10.0.0.14:22-10.0.0.1:53872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:17.300024 kernel: audit: type=1131 audit(1776712697.287:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-12294-10.0.0.14:22-10.0.0.1:53872 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:17.306111 systemd[1]: session-36.scope: Deactivated successfully. Apr 20 19:18:17.306486 systemd[1]: session-36.scope: Consumed 1.959s CPU time, 16.6M memory peak. 
Apr 20 19:18:17.355676 containerd[1659]: time="2026-04-20T19:18:17.339339029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 20 19:18:17.391257 systemd-logind[1627]: Session 36 logged out. Waiting for processes to exit. Apr 20 19:18:17.597718 systemd-logind[1627]: Removed session 36. Apr 20 19:18:20.094000 audit: BPF prog-id=160 op=LOAD Apr 20 19:18:20.094000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=13 a0=5 a1=7f6baa09da70 a2=74 a3=0 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:20.103089 kernel: audit: type=1334 audit(1776712700.094:862): prog-id=160 op=LOAD Apr 20 19:18:20.103242 kernel: audit: type=1300 audit(1776712700.094:862): arch=c000003e syscall=321 success=yes exit=13 a0=5 a1=7f6baa09da70 a2=74 a3=0 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:20.094000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:20.122065 kernel: audit: type=1327 audit(1776712700.094:862): proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:20.098000 audit: BPF prog-id=160 op=UNLOAD Apr 20 19:18:20.098000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=d a1=7f6baa09da70 a2=0 a3=0 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:20.146389 kernel: audit: type=1334 audit(1776712700.098:863): prog-id=160 op=UNLOAD Apr 20 19:18:20.146695 kernel: audit: type=1300 audit(1776712700.098:863): arch=c000003e syscall=3 success=yes exit=0 a0=d 
a1=7f6baa09da70 a2=0 a3=0 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:20.098000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:20.098000 audit: BPF prog-id=161 op=LOAD Apr 20 19:18:20.098000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=13 a0=5 a1=7f6baa09daa0 a2=94 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:20.098000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:20.098000 audit: BPF prog-id=161 op=UNLOAD Apr 20 19:18:20.098000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=d a1=7f6baa09daa0 a2=0 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:20.098000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:20.098000 audit: BPF prog-id=162 op=LOAD Apr 20 19:18:20.098000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=31 a0=5 a1=7f6baa09d960 a2=40 a3=4 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:20.098000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:20.098000 audit: BPF prog-id=162 op=UNLOAD Apr 20 19:18:20.098000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=1f a1=7f6baa09d960 a2=0 a3=4 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:20.098000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:20.098000 audit: BPF prog-id=163 op=LOAD Apr 20 19:18:20.098000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=31 a0=5 a1=7f6baa09da60 a2=94 a3=7f6baa09dbe0 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:20.098000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:20.098000 audit: BPF prog-id=163 op=UNLOAD Apr 20 19:18:20.098000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=1f a1=7f6baa09da60 a2=0 a3=7f6baa09dbe0 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:20.098000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:20.271202 kernel: audit: type=1327 audit(1776712700.098:863): proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:20.271372 kernel: audit: type=1334 audit(1776712700.098:864): prog-id=161 op=LOAD Apr 20 19:18:22.090000 audit: BPF prog-id=164 op=LOAD Apr 20 19:18:22.117033 kernel: kauditd_printk_skb: 17 callbacks suppressed Apr 20 19:18:22.122371 kernel: audit: type=1334 audit(1776712702.090:870): prog-id=164 op=LOAD Apr 20 19:18:22.090000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=12 a0=5 a1=7f6baa09d120 a2=94 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 
key=(null) Apr 20 19:18:22.090000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.090000 audit: BPF prog-id=164 op=UNLOAD Apr 20 19:18:22.090000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=c a1=7f6baa09d120 a2=0 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.090000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.090000 audit: BPF prog-id=165 op=LOAD Apr 20 19:18:22.090000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=12 a0=5 a1=7f6baa09d290 a2=94 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.090000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.175000 audit: BPF prog-id=165 op=UNLOAD Apr 20 19:18:22.175000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=c a1=c000003180 a2=0 a3=0 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.175000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.265865 kernel: audit: type=1300 audit(1776712702.090:870): arch=c000003e syscall=321 success=yes exit=12 a0=5 a1=7f6baa09d120 a2=94 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.266049 kernel: audit: type=1327 audit(1776712702.090:870): 
proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.266064 kernel: audit: type=1334 audit(1776712702.090:871): prog-id=164 op=UNLOAD Apr 20 19:18:22.266078 kernel: audit: type=1300 audit(1776712702.090:871): arch=c000003e syscall=3 success=yes exit=0 a0=c a1=7f6baa09d120 a2=0 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.268333 kernel: audit: type=1327 audit(1776712702.090:871): proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.269972 kernel: audit: type=1334 audit(1776712702.090:872): prog-id=165 op=LOAD Apr 20 19:18:22.270025 kernel: audit: type=1300 audit(1776712702.090:872): arch=c000003e syscall=321 success=yes exit=12 a0=5 a1=7f6baa09d290 a2=94 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.270045 kernel: audit: type=1327 audit(1776712702.090:872): proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.270083 kernel: audit: type=1334 audit(1776712702.175:873): prog-id=165 op=UNLOAD Apr 20 19:18:22.443000 audit: BPF prog-id=166 op=LOAD Apr 20 19:18:22.443000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=12 a0=5 a1=7f6b93ffeaa0 a2=94 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.443000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.443000 audit: BPF prog-id=166 op=UNLOAD Apr 20 19:18:22.443000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=c a1=7f6b93ffeaa0 a2=0 a3=2 items=0 ppid=4822 pid=4845 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.443000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.445036 kubelet[3163]: E0420 19:18:22.424639 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.394s" Apr 20 19:18:22.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-12295-10.0.0.14:22-10.0.0.1:45134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:22.473360 systemd[1]: Started sshd@35-12295-10.0.0.14:22-10.0.0.1:45134.service - OpenSSH per-connection server daemon (10.0.0.1:45134). Apr 20 19:18:22.825000 audit: BPF prog-id=167 op=LOAD Apr 20 19:18:22.825000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=30 a0=5 a1=7f6b93ffe290 a2=94 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.825000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.858000 audit: BPF prog-id=167 op=UNLOAD Apr 20 19:18:22.858000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=1e a1=c0006b1180 a2=0 a3=0 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.858000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.931000 audit: BPF prog-id=168 op=LOAD Apr 20 19:18:22.931000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=12 a0=5 a1=7f6b93ffeaa0 
a2=94 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.931000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:22.934000 audit: BPF prog-id=168 op=UNLOAD Apr 20 19:18:22.934000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=c a1=7f6b93ffeaa0 a2=0 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:22.934000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:23.317000 audit: BPF prog-id=169 op=LOAD Apr 20 19:18:23.317000 audit[4845]: SYSCALL arch=c000003e syscall=321 success=yes exit=33 a0=5 a1=7f6b93ffe290 a2=94 a3=2 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:23.317000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:23.328000 audit: BPF prog-id=169 op=UNLOAD Apr 20 19:18:23.328000 audit[4845]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=21 a1=c0006b1180 a2=0 a3=0 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:23.328000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:23.469000 audit: BPF prog-id=170 op=LOAD Apr 20 19:18:23.469000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe3c89d070 a2=98 a3=3 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:23.469000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:23.470000 audit: BPF prog-id=170 op=UNLOAD Apr 20 19:18:23.470000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe3c89d040 a3=0 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:23.470000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:23.476000 audit: BPF prog-id=171 op=LOAD Apr 20 19:18:23.476000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe3c89ce60 a2=94 a3=54428f items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:23.476000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:23.477000 audit: BPF prog-id=171 op=UNLOAD Apr 20 19:18:23.477000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe3c89ce60 a2=94 a3=54428f items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:23.477000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:23.477000 audit: BPF prog-id=172 op=LOAD Apr 20 19:18:23.477000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe3c89ce90 a2=94 a3=2 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:23.477000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:23.477000 audit: BPF prog-id=172 op=UNLOAD Apr 20 19:18:23.477000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe3c89ce90 a2=0 a3=2 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:23.477000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:23.566000 audit[5173]: AUDIT1101 pid=5173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:23.576931 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 45134 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:18:23.654126 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:18:23.645000 audit[5173]: AUDIT1103 pid=5173 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:23.648000 audit[5173]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0656bd20 a2=3 a3=0 items=0 ppid=1 pid=5173 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:23.648000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:18:23.693375 
systemd-logind[1627]: New session '37' of user 'core' with class 'user' and type 'tty'. Apr 20 19:18:23.722025 containerd[1659]: time="2026-04-20T19:18:23.708054861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=1, bytes read=3256320" Apr 20 19:18:23.777154 containerd[1659]: time="2026-04-20T19:18:23.731257928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:18:23.724308 systemd[1]: Started session-37.scope - Session 37 of User core. Apr 20 19:18:23.867943 containerd[1659]: time="2026-04-20T19:18:23.865489367Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:18:23.879496 containerd[1659]: time="2026-04-20T19:18:23.879199005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:18:23.881000 audit[5173]: AUDIT1105 pid=5173 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:23.900878 containerd[1659]: time="2026-04-20T19:18:23.890283874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 6.527662461s" Apr 20 19:18:23.902336 containerd[1659]: 
time="2026-04-20T19:18:23.902146185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 20 19:18:23.933000 audit[5178]: AUDIT1103 pid=5178 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:23.946960 containerd[1659]: time="2026-04-20T19:18:23.946829421Z" level=info msg="CreateContainer within sandbox \"a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850\" for container name:\"calico-csi\"" Apr 20 19:18:24.087000 audit: BPF prog-id=173 op=LOAD Apr 20 19:18:24.087000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe3c89cd50 a2=94 a3=1 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.087000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.089000 audit: BPF prog-id=173 op=UNLOAD Apr 20 19:18:24.089000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe3c89cd50 a2=94 a3=1 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.089000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.465000 audit: BPF prog-id=174 op=LOAD Apr 20 19:18:24.465000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe3c89cd40 a2=94 a3=4 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.465000 audit: BPF prog-id=174 op=UNLOAD Apr 20 19:18:24.465000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe3c89cd40 a2=0 a3=4 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.465000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.472000 audit: BPF prog-id=175 op=LOAD Apr 20 19:18:24.472000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe3c89cba0 a2=94 a3=5 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.472000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.473000 audit: BPF prog-id=175 op=UNLOAD Apr 20 19:18:24.473000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe3c89cba0 a2=0 a3=5 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.473000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.475000 audit: BPF prog-id=176 op=LOAD Apr 20 19:18:24.475000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe3c89cdc0 a2=94 a3=6 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 
19:18:24.475000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.475000 audit: BPF prog-id=176 op=UNLOAD Apr 20 19:18:24.475000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe3c89cdc0 a2=0 a3=6 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.475000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.501000 audit: BPF prog-id=177 op=LOAD Apr 20 19:18:24.501000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe3c89c570 a2=94 a3=88 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.501000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.502000 audit: BPF prog-id=178 op=LOAD Apr 20 19:18:24.502000 audit[5176]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe3c89c3f0 a2=94 a3=2 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.502000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.502000 audit: BPF prog-id=178 op=UNLOAD Apr 20 19:18:24.502000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe3c89c420 a2=0 a3=7ffe3c89c520 items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.502000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.504000 audit: BPF prog-id=177 op=UNLOAD Apr 20 19:18:24.504000 audit[5176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=39817d10 a2=0 a3=d0508d21ccb8340e items=0 ppid=4845 pid=5176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:24.504000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Apr 20 19:18:24.539659 containerd[1659]: time="2026-04-20T19:18:24.521231600Z" level=info msg="Container 7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:18:25.071693 containerd[1659]: time="2026-04-20T19:18:25.071369323Z" level=info msg="CreateContainer within sandbox \"a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850\" for name:\"calico-csi\" returns container id \"7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536\"" Apr 20 19:18:25.515000 audit: BPF prog-id=179 op=LOAD Apr 20 19:18:25.515000 audit[5187]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeccfb3160 a2=98 a3=1999999999999999 items=0 ppid=4845 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:25.515000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Apr 20 19:18:25.518000 audit: BPF prog-id=179 op=UNLOAD Apr 20 19:18:25.518000 audit[5187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeccfb3130 a3=0 items=0 ppid=4845 pid=5187 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:25.518000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Apr 20 19:18:25.518000 audit: BPF prog-id=180 op=LOAD Apr 20 19:18:25.518000 audit[5187]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeccfb3040 a2=94 a3=ffff items=0 ppid=4845 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:25.518000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Apr 20 19:18:25.520000 audit: BPF prog-id=180 op=UNLOAD Apr 20 19:18:25.520000 audit[5187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeccfb3040 a2=94 a3=ffff items=0 ppid=4845 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:25.520000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Apr 20 19:18:25.520000 audit: BPF prog-id=181 op=LOAD Apr 20 19:18:25.520000 audit[5187]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=3 a0=5 a1=7ffeccfb3080 a2=94 a3=7ffeccfb3260 items=0 ppid=4845 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:25.520000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Apr 20 19:18:25.521000 audit: BPF prog-id=181 op=UNLOAD Apr 20 19:18:25.521000 audit[5187]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeccfb3080 a2=94 a3=7ffeccfb3260 items=0 ppid=4845 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:25.521000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Apr 20 19:18:25.961471 kubelet[3163]: E0420 19:18:25.877876 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:18:25.961471 kubelet[3163]: E0420 19:18:25.929635 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.041s" Apr 20 19:18:26.003994 containerd[1659]: time="2026-04-20T19:18:25.968302522Z" level=info msg="StartContainer for \"7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536\"" Apr 20 19:18:26.186865 containerd[1659]: time="2026-04-20T19:18:26.186815875Z" level=info msg="connecting to shim 
7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536" address="unix:///run/containerd/s/0eb0305fea608ca907cd45f8e4caa2dc3006719b8287158ec4c075e9e2b37cfa" protocol=ttrpc version=3 Apr 20 19:18:26.895000 audit[5192]: NETFILTER_CFG table=filter:145 family=2 entries=8 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:26.895000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffc4da2e6e0 a2=0 a3=7ffc4da2e6cc items=0 ppid=3270 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:26.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:26.938000 audit[5192]: NETFILTER_CFG table=nat:146 family=2 entries=44 op=nft_register_chain pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:26.938000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=14660 a0=3 a1=7ffc4da2e6e0 a2=0 a3=7ffc4da2e6cc items=0 ppid=3270 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:26.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:27.121332 kubelet[3163]: I0420 19:18:27.120774 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs\") pod \"calico-apiserver-84684997fc-zpm5v\" (UID: \"dfb0b7d2-b28d-4433-9fba-0074dfdf81ee\") " 
pod="calico-system/calico-apiserver-84684997fc-zpm5v" Apr 20 19:18:27.388187 kubelet[3163]: I0420 19:18:27.387462 3163 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kld4g\" (UniqueName: \"kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g\") pod \"calico-apiserver-84684997fc-zpm5v\" (UID: \"dfb0b7d2-b28d-4433-9fba-0074dfdf81ee\") " pod="calico-system/calico-apiserver-84684997fc-zpm5v" Apr 20 19:18:28.312000 audit[5218]: NETFILTER_CFG table=filter:147 family=2 entries=8 op=nft_register_rule pid=5218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:28.312000 audit[5218]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe4c2a2da0 a2=0 a3=7ffe4c2a2d8c items=0 ppid=3270 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:28.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:28.432002 kernel: kauditd_printk_skb: 112 callbacks suppressed Apr 20 19:18:28.435246 kernel: audit: type=1325 audit(1776712708.312:914): table=filter:147 family=2 entries=8 op=nft_register_rule pid=5218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:28.435481 kernel: audit: type=1300 audit(1776712708.312:914): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe4c2a2da0 a2=0 a3=7ffe4c2a2d8c items=0 ppid=3270 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:28.435501 kernel: audit: type=1327 audit(1776712708.312:914): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:28.490067 systemd[1]: Started cri-containerd-7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536.scope - libcontainer container 7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536. Apr 20 19:18:28.597000 audit[5218]: NETFILTER_CFG table=nat:148 family=2 entries=44 op=nft_unregister_chain pid=5218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:28.597000 audit[5218]: SYSCALL arch=c000003e syscall=46 success=yes exit=12900 a0=3 a1=7ffe4c2a2da0 a2=0 a3=7ffe4c2a2d8c items=0 ppid=3270 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:28.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:28.768398 kernel: audit: type=1325 audit(1776712708.597:915): table=nat:148 family=2 entries=44 op=nft_unregister_chain pid=5218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:28.603965 systemd[1]: Created slice kubepods-besteffort-poddfb0b7d2_b28d_4433_9fba_0074dfdf81ee.slice - libcontainer container kubepods-besteffort-poddfb0b7d2_b28d_4433_9fba_0074dfdf81ee.slice. 
Apr 20 19:18:28.801729 kernel: audit: type=1300 audit(1776712708.597:915): arch=c000003e syscall=46 success=yes exit=12900 a0=3 a1=7ffe4c2a2da0 a2=0 a3=7ffe4c2a2d8c items=0 ppid=3270 pid=5218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:28.803747 kernel: audit: type=1327 audit(1776712708.597:915): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:28.939680 containerd[1659]: time="2026-04-20T19:18:28.937358971Z" level=info msg="RunPodSandbox for name:\"calico-apiserver-84684997fc-zpm5v\" uid:\"dfb0b7d2-b28d-4433-9fba-0074dfdf81ee\" namespace:\"calico-system\"" Apr 20 19:18:29.266001 sshd[5178]: Connection closed by 10.0.0.1 port 45134 Apr 20 19:18:29.311009 sshd-session[5173]: pam_unix(sshd:session): session closed for user core Apr 20 19:18:29.355000 audit[5173]: AUDIT1106 pid=5173 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:29.390690 kernel: audit: type=1106 audit(1776712709.355:916): pid=5173 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:29.368000 audit[5173]: AUDIT1104 pid=5173 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:29.436518 kernel: audit: type=1104 audit(1776712709.368:917): pid=5173 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:29.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-12295-10.0.0.14:22-10.0.0.1:45134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:29.544771 systemd[1]: sshd@35-12295-10.0.0.14:22-10.0.0.1:45134.service: Deactivated successfully. Apr 20 19:18:29.604107 kernel: audit: type=1131 audit(1776712709.544:918): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-12295-10.0.0.14:22-10.0.0.1:45134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:29.858211 systemd[1]: session-37.scope: Deactivated successfully. Apr 20 19:18:29.947706 systemd[1]: session-37.scope: Consumed 2.304s CPU time, 44.2M memory peak. Apr 20 19:18:30.017071 systemd-logind[1627]: Session 37 logged out. Waiting for processes to exit. Apr 20 19:18:30.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-4104-10.0.0.14:22-10.0.0.1:56876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:30.104224 systemd[1]: Started sshd@36-4104-10.0.0.14:22-10.0.0.1:56876.service - OpenSSH per-connection server daemon (10.0.0.1:56876). Apr 20 19:18:30.118162 kernel: audit: type=1130 audit(1776712710.104:919): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-4104-10.0.0.14:22-10.0.0.1:56876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:18:30.114922 systemd-logind[1627]: Removed session 37. Apr 20 19:18:30.825642 containerd[1659]: time="2026-04-20T19:18:30.825388399Z" level=error msg="get state for 7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536" error="context deadline exceeded" Apr 20 19:18:30.836704 containerd[1659]: time="2026-04-20T19:18:30.831303834Z" level=warning msg="unknown status" status=0 Apr 20 19:18:31.197000 audit: BPF prog-id=182 op=LOAD Apr 20 19:18:31.197000 audit[5196]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c00010c490 a2=98 a3=0 items=0 ppid=5035 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:31.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737343166663065393966353733396162646132393264313037383065 Apr 20 19:18:31.197000 audit: BPF prog-id=183 op=LOAD Apr 20 19:18:31.197000 audit[5196]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c220 a2=98 a3=0 items=0 ppid=5035 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:31.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737343166663065393966353733396162646132393264313037383065 Apr 20 19:18:31.197000 audit: BPF prog-id=183 op=UNLOAD Apr 20 19:18:31.197000 audit[5196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5035 pid=5196 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:31.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737343166663065393966353733396162646132393264313037383065 Apr 20 19:18:31.197000 audit: BPF prog-id=182 op=UNLOAD Apr 20 19:18:31.197000 audit[5196]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=13 a1=0 a2=0 a3=0 items=0 ppid=5035 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:31.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737343166663065393966353733396162646132393264313037383065 Apr 20 19:18:31.197000 audit: BPF prog-id=184 op=LOAD Apr 20 19:18:31.197000 audit[5196]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c00010c6f0 a2=98 a3=0 items=0 ppid=5035 pid=5196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:31.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737343166663065393966353733396162646132393264313037383065 Apr 20 19:18:31.466000 audit[5240]: AUDIT1101 pid=5240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:31.486000 audit[5240]: AUDIT1103 pid=5240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:31.488318 sshd[5240]: Accepted publickey for core from 10.0.0.1 port 56876 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:18:31.487000 audit[5240]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc8e1908c0 a2=3 a3=0 items=0 ppid=1 pid=5240 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:31.487000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:18:31.504732 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:18:31.550000 audit[5252]: NETFILTER_CFG table=filter:149 family=2 entries=8 op=nft_register_rule pid=5252 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:31.550000 audit[5252]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffcc7430420 a2=0 a3=7ffcc743040c items=0 ppid=3270 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:31.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:31.564834 containerd[1659]: time="2026-04-20T19:18:31.502238471Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 
19:18:31.650000 audit[5252]: NETFILTER_CFG table=nat:150 family=2 entries=40 op=nft_register_rule pid=5252 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:31.650000 audit[5252]: SYSCALL arch=c000003e syscall=46 success=yes exit=12772 a0=3 a1=7ffcc7430420 a2=0 a3=7ffcc743040c items=0 ppid=3270 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:31.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:31.663331 systemd-logind[1627]: New session '38' of user 'core' with class 'user' and type 'tty'. Apr 20 19:18:31.706412 systemd[1]: Started session-38.scope - Session 38 of User core. Apr 20 19:18:31.948000 audit[5240]: AUDIT1105 pid=5240 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:32.014000 audit[5253]: AUDIT1103 pid=5253 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:33.998051 sshd[5253]: Connection closed by 10.0.0.1 port 56876 Apr 20 19:18:34.037000 audit[5240]: AUDIT1106 pid=5240 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:34.052277 
kernel: kauditd_printk_skb: 28 callbacks suppressed Apr 20 19:18:34.037000 audit[5240]: AUDIT1104 pid=5240 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:34.032408 sshd-session[5240]: pam_unix(sshd:session): session closed for user core Apr 20 19:18:34.153469 kernel: audit: type=1106 audit(1776712714.037:932): pid=5240 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:34.161614 kernel: audit: type=1104 audit(1776712714.037:933): pid=5240 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:34.225488 systemd[1]: sshd@36-4104-10.0.0.14:22-10.0.0.1:56876.service: Deactivated successfully. Apr 20 19:18:34.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-4104-10.0.0.14:22-10.0.0.1:56876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:34.356935 kernel: audit: type=1131 audit(1776712714.236:934): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-4104-10.0.0.14:22-10.0.0.1:56876 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:34.562072 systemd[1]: session-38.scope: Deactivated successfully. Apr 20 19:18:34.616221 systemd[1]: session-38.scope: Consumed 1.104s CPU time, 56.5M memory peak. 
Apr 20 19:18:34.675138 systemd-logind[1627]: Session 38 logged out. Waiting for processes to exit. Apr 20 19:18:34.676481 containerd[1659]: time="2026-04-20T19:18:34.676126103Z" level=info msg="StartContainer for \"7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536\" returns successfully" Apr 20 19:18:34.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-5-10.0.0.14:22-10.0.0.1:56886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:34.723143 systemd[1]: Started sshd@37-5-10.0.0.14:22-10.0.0.1:56886.service - OpenSSH per-connection server daemon (10.0.0.1:56886). Apr 20 19:18:34.753474 containerd[1659]: time="2026-04-20T19:18:34.739269816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 20 19:18:34.776696 kernel: audit: type=1130 audit(1776712714.722:935): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-5-10.0.0.14:22-10.0.0.1:56886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:34.967475 systemd-logind[1627]: Removed session 38. 
Apr 20 19:18:36.254000 audit[5276]: AUDIT1101 pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:36.346839 kernel: audit: type=1101 audit(1776712716.254:936): pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:36.446948 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 56886 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:18:36.567000 audit[5276]: AUDIT1103 pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:36.592488 kernel: audit: type=1103 audit(1776712716.567:937): pid=5276 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:36.601914 kernel: audit: type=1006 audit(1776712716.579:938): pid=5276 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=39 res=1 Apr 20 19:18:36.579000 audit[5276]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd2971880 a2=3 a3=0 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:36.579000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:18:36.679972 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:18:36.736627 kernel: audit: type=1300 audit(1776712716.579:938): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcd2971880 a2=3 a3=0 items=0 ppid=1 pid=5276 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:36.738897 kernel: audit: type=1327 audit(1776712716.579:938): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:18:37.467917 systemd-logind[1627]: New session '39' of user 'core' with class 'user' and type 'tty'. Apr 20 19:18:37.649458 systemd[1]: Started session-39.scope - Session 39 of User core. Apr 20 19:18:37.941000 audit[5276]: AUDIT1105 pid=5276 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:38.027685 kernel: audit: type=1105 audit(1776712717.941:939): pid=5276 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:38.100000 audit[5281]: AUDIT1103 pid=5281 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:38.359344 kubelet[3163]: E0420 19:18:38.311402 3163 kubelet.go:2627] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.329s" Apr 20 19:18:44.739287 sshd[5281]: Connection closed by 10.0.0.1 port 56886 Apr 20 19:18:44.748000 audit[5276]: AUDIT1106 pid=5276 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:44.746912 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Apr 20 19:18:44.779474 kernel: kauditd_printk_skb: 1 callbacks suppressed Apr 20 19:18:44.748000 audit[5276]: AUDIT1104 pid=5276 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:44.819457 kernel: audit: type=1106 audit(1776712724.748:941): pid=5276 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:44.819714 kernel: audit: type=1104 audit(1776712724.748:942): pid=5276 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:44.967162 systemd[1]: sshd@37-5-10.0.0.14:22-10.0.0.1:56886.service: Deactivated successfully. 
Apr 20 19:18:44.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-5-10.0.0.14:22-10.0.0.1:56886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:45.002343 kernel: audit: type=1131 audit(1776712724.988:943): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-5-10.0.0.14:22-10.0.0.1:56886 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:45.239182 systemd[1]: session-39.scope: Deactivated successfully. Apr 20 19:18:45.313649 systemd[1]: session-39.scope: Consumed 2.168s CPU time, 43.7M memory peak. Apr 20 19:18:45.470744 systemd-logind[1627]: Session 39 logged out. Waiting for processes to exit. Apr 20 19:18:45.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-6-10.0.0.14:22-10.0.0.1:55178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:45.508159 systemd[1]: Started sshd@38-6-10.0.0.14:22-10.0.0.1:55178.service - OpenSSH per-connection server daemon (10.0.0.1:55178). Apr 20 19:18:45.668109 kernel: audit: type=1130 audit(1776712725.507:944): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-6-10.0.0.14:22-10.0.0.1:55178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:45.707169 systemd-logind[1627]: Removed session 39. 
Apr 20 19:18:46.457614 kubelet[3163]: E0420 19:18:46.457253 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.38s" Apr 20 19:18:46.703000 audit[5316]: NETFILTER_CFG table=filter:151 family=2 entries=8 op=nft_register_rule pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:46.703000 audit[5316]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffcd48d7230 a2=0 a3=7ffcd48d721c items=0 ppid=3270 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:46.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:46.891000 audit[5316]: NETFILTER_CFG table=nat:152 family=2 entries=26 op=nft_register_rule pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:46.891000 audit[5316]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffcd48d7230 a2=0 a3=7ffcd48d721c items=0 ppid=3270 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:46.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:47.090675 kernel: audit: type=1325 audit(1776712726.703:945): table=filter:151 family=2 entries=8 op=nft_register_rule pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:47.091101 kernel: audit: type=1300 audit(1776712726.703:945): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffcd48d7230 a2=0 a3=7ffcd48d721c items=0 ppid=3270 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:47.091273 kernel: audit: type=1327 audit(1776712726.703:945): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:47.091296 kernel: audit: type=1325 audit(1776712726.891:946): table=nat:152 family=2 entries=26 op=nft_register_rule pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:47.091312 kernel: audit: type=1300 audit(1776712726.891:946): arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffcd48d7230 a2=0 a3=7ffcd48d721c items=0 ppid=3270 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:47.091328 kernel: audit: type=1327 audit(1776712726.891:946): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:47.362000 audit[5311]: AUDIT1101 pid=5311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:47.464000 audit[5311]: AUDIT1103 pid=5311 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:47.467000 audit[5311]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc56744b50 a2=3 a3=0 items=0 ppid=1 pid=5311 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:47.467000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:18:47.510000 audit[5320]: NETFILTER_CFG table=filter:153 family=2 entries=20 op=nft_register_rule pid=5320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:47.510000 audit[5320]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffdee3b2b50 a2=0 a3=7ffdee3b2b3c items=0 ppid=3270 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:47.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:47.599277 sshd[5311]: Accepted publickey for core from 10.0.0.1 port 55178 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:18:47.605000 audit[5320]: NETFILTER_CFG table=nat:154 family=2 entries=26 op=nft_register_rule pid=5320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:18:47.510488 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:18:47.605000 audit[5320]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffdee3b2b50 a2=0 a3=0 items=0 ppid=3270 pid=5320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:47.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:18:47.781106 systemd-networkd[1441]: vxlan.calico: Link UP Apr 20 19:18:47.781154 systemd-networkd[1441]: vxlan.calico: Gained carrier Apr 20 19:18:47.848015 
systemd-logind[1627]: New session '40' of user 'core' with class 'user' and type 'tty'. Apr 20 19:18:47.854924 systemd[1]: Started session-40.scope - Session 40 of User core. Apr 20 19:18:47.980309 kubelet[3163]: E0420 19:18:47.976866 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.08s" Apr 20 19:18:48.039000 audit[5311]: AUDIT1105 pid=5311 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:48.087000 audit[5326]: AUDIT1103 pid=5326 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:48.801701 systemd-networkd[1441]: caliecb05da646a: Link UP Apr 20 19:18:49.242339 systemd-networkd[1441]: caliecb05da646a: Gained carrier Apr 20 19:18:49.418620 containerd[1659]: time="2026-04-20T19:18:49.404297108Z" level=info msg="container event discarded" container=4dc5f3cee3e4345df4d1f6cd625d29a3d7985c094a29b2b716e633b97294a72b type=CONTAINER_STOPPED_EVENT Apr 20 19:18:49.587341 systemd-networkd[1441]: vxlan.calico: Gained IPv6LL Apr 20 19:18:50.091830 kubelet[3163]: E0420 19:18:50.057135 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.171s" Apr 20 19:18:50.340676 containerd[1659]: 2026-04-20 19:18:39.566 [INFO][5236] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0 calico-apiserver-84684997fc- calico-system dfb0b7d2-b28d-4433-9fba-0074dfdf81ee 2335 0 2026-04-20 19:18:26 +0000 UTC 
map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84684997fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84684997fc-zpm5v eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliecb05da646a [] [] }} ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Namespace="calico-system" Pod="calico-apiserver-84684997fc-zpm5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--84684997fc--zpm5v-" Apr 20 19:18:50.340676 containerd[1659]: 2026-04-20 19:18:39.612 [INFO][5236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Namespace="calico-system" Pod="calico-apiserver-84684997fc-zpm5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0" Apr 20 19:18:50.340676 containerd[1659]: 2026-04-20 19:18:43.377 [INFO][5294] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" HandleID="k8s-pod-network.de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Workload="localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0" Apr 20 19:18:50.351742 containerd[1659]: 2026-04-20 19:18:44.240 [INFO][5294] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" HandleID="k8s-pod-network.de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Workload="localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004951d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-84684997fc-zpm5v", "timestamp":"2026-04-20 19:18:43.376507689 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00075e6e0)} Apr 20 19:18:50.351742 containerd[1659]: 2026-04-20 19:18:44.246 [INFO][5294] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 20 19:18:50.351742 containerd[1659]: 2026-04-20 19:18:44.311 [INFO][5294] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 20 19:18:50.351742 containerd[1659]: 2026-04-20 19:18:44.475 [INFO][5294] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 20 19:18:50.351742 containerd[1659]: 2026-04-20 19:18:45.418 [INFO][5294] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" host="localhost" Apr 20 19:18:50.351742 containerd[1659]: 2026-04-20 19:18:46.602 [INFO][5294] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 20 19:18:50.351742 containerd[1659]: 2026-04-20 19:18:46.639 [INFO][5294] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 20 19:18:50.351742 containerd[1659]: 2026-04-20 19:18:46.690 [INFO][5294] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 20 19:18:50.351742 containerd[1659]: 2026-04-20 19:18:47.637 [INFO][5294] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 20 19:18:50.351742 containerd[1659]: 2026-04-20 19:18:47.637 [INFO][5294] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" host="localhost" Apr 20 19:18:50.352122 containerd[1659]: 2026-04-20 19:18:47.833 [INFO][5294] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571 Apr 20 19:18:50.352122 containerd[1659]: 2026-04-20 19:18:47.954 [INFO][5294] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" host="localhost" Apr 20 19:18:50.352122 containerd[1659]: 2026-04-20 19:18:48.470 [INFO][5294] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" host="localhost" Apr 20 19:18:50.352122 containerd[1659]: 2026-04-20 19:18:48.471 [INFO][5294] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" host="localhost" Apr 20 19:18:50.352122 containerd[1659]: 2026-04-20 19:18:48.471 [INFO][5294] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 20 19:18:50.352122 containerd[1659]: 2026-04-20 19:18:48.471 [INFO][5294] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" HandleID="k8s-pod-network.de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Workload="localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0" Apr 20 19:18:50.352255 containerd[1659]: 2026-04-20 19:18:48.705 [INFO][5236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Namespace="calico-system" Pod="calico-apiserver-84684997fc-zpm5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0", GenerateName:"calico-apiserver-84684997fc-", Namespace:"calico-system", SelfLink:"", UID:"dfb0b7d2-b28d-4433-9fba-0074dfdf81ee", ResourceVersion:"2335", Generation:0, CreationTimestamp:time.Date(2026, time.April, 20, 19, 18, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84684997fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84684997fc-zpm5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliecb05da646a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 20 19:18:50.352346 containerd[1659]: 2026-04-20 19:18:48.707 [INFO][5236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Namespace="calico-system" Pod="calico-apiserver-84684997fc-zpm5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0" Apr 20 19:18:50.352346 containerd[1659]: 2026-04-20 19:18:48.733 [INFO][5236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecb05da646a ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Namespace="calico-system" Pod="calico-apiserver-84684997fc-zpm5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0" Apr 20 19:18:50.352346 containerd[1659]: 2026-04-20 19:18:49.420 [INFO][5236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Namespace="calico-system" Pod="calico-apiserver-84684997fc-zpm5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0" Apr 20 19:18:50.352400 containerd[1659]: 2026-04-20 19:18:49.465 [INFO][5236] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Namespace="calico-system" Pod="calico-apiserver-84684997fc-zpm5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0", GenerateName:"calico-apiserver-84684997fc-", Namespace:"calico-system", 
SelfLink:"", UID:"dfb0b7d2-b28d-4433-9fba-0074dfdf81ee", ResourceVersion:"2335", Generation:0, CreationTimestamp:time.Date(2026, time.April, 20, 19, 18, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84684997fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571", Pod:"calico-apiserver-84684997fc-zpm5v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliecb05da646a", MAC:"7e:13:69:d3:04:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 20 19:18:50.352524 containerd[1659]: 2026-04-20 19:18:50.220 [INFO][5236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Namespace="calico-system" Pod="calico-apiserver-84684997fc-zpm5v" WorkloadEndpoint="localhost-k8s-calico--apiserver--84684997fc--zpm5v-eth0" Apr 20 19:18:51.198337 systemd-networkd[1441]: caliecb05da646a: Gained IPv6LL Apr 20 19:18:51.557185 sshd[5326]: Connection closed by 10.0.0.1 port 55178 Apr 20 19:18:51.471759 sshd-session[5311]: pam_unix(sshd:session): session closed for user core Apr 20 19:18:51.632000 audit[5311]: AUDIT1106 pid=5311 uid=0 auid=500 ses=40 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:51.633000 audit[5311]: AUDIT1104 pid=5311 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:51.717196 kernel: kauditd_printk_skb: 13 callbacks suppressed Apr 20 19:18:51.734657 kernel: audit: type=1106 audit(1776712731.632:954): pid=5311 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:51.735439 kernel: audit: type=1104 audit(1776712731.633:955): pid=5311 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:51.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-6-10.0.0.14:22-10.0.0.1:55178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:51.896148 kernel: audit: type=1131 audit(1776712731.845:956): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-6-10.0.0.14:22-10.0.0.1:55178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:18:51.837106 systemd[1]: sshd@38-6-10.0.0.14:22-10.0.0.1:55178.service: Deactivated successfully. Apr 20 19:18:51.896647 systemd[1]: session-40.scope: Deactivated successfully. Apr 20 19:18:51.896989 systemd[1]: session-40.scope: Consumed 1.802s CPU time, 33.7M memory peak. Apr 20 19:18:52.063258 systemd-logind[1627]: Session 40 logged out. Waiting for processes to exit. Apr 20 19:18:52.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-12296-10.0.0.14:22-10.0.0.1:51712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:52.161831 systemd[1]: Started sshd@39-12296-10.0.0.14:22-10.0.0.1:51712.service - OpenSSH per-connection server daemon (10.0.0.1:51712). Apr 20 19:18:52.254368 kernel: audit: type=1130 audit(1776712732.161:957): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-12296-10.0.0.14:22-10.0.0.1:51712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:52.303931 systemd-logind[1627]: Removed session 40. 
Apr 20 19:18:52.435000 audit: BPF prog-id=185 op=LOAD Apr 20 19:18:52.435000 audit[5383]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd64ad7140 a2=98 a3=0 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.435000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.461000 audit: BPF prog-id=185 op=UNLOAD Apr 20 19:18:52.461000 audit[5383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd64ad7110 a3=0 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.461000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.519000 audit: BPF prog-id=186 op=LOAD Apr 20 19:18:52.519000 audit[5383]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd64ad6f50 a2=94 a3=54428f items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.519000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.524000 audit: BPF prog-id=186 op=UNLOAD Apr 20 
19:18:52.524000 audit[5383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd64ad6f50 a2=94 a3=54428f items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.524000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.526000 audit: BPF prog-id=187 op=LOAD Apr 20 19:18:52.526000 audit[5383]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd64ad6f80 a2=94 a3=2 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.526000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.528000 audit: BPF prog-id=187 op=UNLOAD Apr 20 19:18:52.528000 audit[5383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd64ad6f80 a2=0 a3=2 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.528000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.528000 audit: BPF prog-id=188 op=LOAD Apr 20 19:18:52.528000 audit[5383]: SYSCALL arch=c000003e syscall=321 
success=yes exit=6 a0=5 a1=7ffd64ad6d30 a2=94 a3=4 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.528000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.528000 audit: BPF prog-id=188 op=UNLOAD Apr 20 19:18:52.528000 audit[5383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd64ad6d30 a2=94 a3=4 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.528000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.528000 audit: BPF prog-id=189 op=LOAD Apr 20 19:18:52.528000 audit[5383]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd64ad6e30 a2=94 a3=7ffd64ad6fb0 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.528000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.529000 audit: BPF prog-id=189 op=UNLOAD Apr 20 19:18:52.529000 audit[5383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd64ad6e30 a2=0 
a3=7ffd64ad6fb0 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.529000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.530000 audit: BPF prog-id=190 op=LOAD Apr 20 19:18:52.531173 kernel: audit: type=1334 audit(1776712732.435:958): prog-id=185 op=LOAD Apr 20 19:18:52.531197 kernel: audit: type=1300 audit(1776712732.435:958): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd64ad7140 a2=98 a3=0 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.531266 kernel: audit: type=1327 audit(1776712732.435:958): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.531277 kernel: audit: type=1334 audit(1776712732.461:959): prog-id=185 op=UNLOAD Apr 20 19:18:52.531291 kernel: audit: type=1300 audit(1776712732.461:959): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd64ad7110 a3=0 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.531306 kernel: audit: type=1327 audit(1776712732.461:959): 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.530000 audit[5383]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd64ad6560 a2=94 a3=2 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.530000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.537000 audit: BPF prog-id=190 op=UNLOAD Apr 20 19:18:52.537000 audit[5383]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd64ad6560 a2=0 a3=2 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.537000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:52.537000 audit: BPF prog-id=191 op=LOAD Apr 20 19:18:52.537000 audit[5383]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd64ad6660 a2=94 a3=30 items=0 ppid=4845 pid=5383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:52.537000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Apr 20 19:18:53.090000 audit: BPF prog-id=192 op=LOAD Apr 20 19:18:53.090000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd5e095120 a2=98 a3=0 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:53.090000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:53.090000 audit: BPF prog-id=192 op=UNLOAD Apr 20 19:18:53.090000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd5e0950f0 a3=0 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:53.090000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:53.091000 audit: BPF prog-id=193 op=LOAD Apr 20 19:18:53.091000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd5e094f10 a2=94 a3=54428f items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:53.091000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:53.091000 audit: BPF prog-id=193 op=UNLOAD Apr 20 19:18:53.091000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd5e094f10 a2=94 a3=54428f items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:53.091000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:53.091000 audit: BPF prog-id=194 op=LOAD Apr 20 19:18:53.091000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd5e094f40 a2=94 a3=2 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:53.091000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:53.091000 audit: BPF prog-id=194 op=UNLOAD Apr 20 19:18:53.091000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd5e094f40 a2=0 a3=2 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:53.091000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:53.277716 containerd[1659]: time="2026-04-20T19:18:53.276222018Z" level=info msg="connecting to shim de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" address="unix:///run/containerd/s/9f25d20f4617cde34f7397032d9ecbc0b43cd780bc15ce3e8713428f4b2ceb63" namespace=k8s.io protocol=ttrpc version=3 Apr 20 19:18:53.394641 kubelet[3163]: E0420 19:18:53.393208 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:18:53.534000 audit[5378]: AUDIT1101 pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:53.537657 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 51712 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:18:53.544000 audit[5378]: AUDIT1103 pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:53.544000 audit[5378]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc405d4560 a2=3 a3=0 items=0 ppid=1 pid=5378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:53.544000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:18:53.552525 sshd-session[5378]: pam_unix(sshd:session): 
session opened for user core(uid=500) by core(uid=0) Apr 20 19:18:53.676184 systemd-logind[1627]: New session '41' of user 'core' with class 'user' and type 'tty'. Apr 20 19:18:53.756471 systemd[1]: Started session-41.scope - Session 41 of User core. Apr 20 19:18:53.848000 audit[5378]: AUDIT1105 pid=5378 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:53.897000 audit[5426]: AUDIT1103 pid=5426 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:54.955685 systemd[1]: Started cri-containerd-de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571.scope - libcontainer container de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571. 
Apr 20 19:18:55.101884 sshd[5426]: Connection closed by 10.0.0.1 port 51712 Apr 20 19:18:55.103000 audit[5378]: AUDIT1106 pid=5378 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:55.103000 audit[5378]: AUDIT1104 pid=5378 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:18:55.103091 sshd-session[5378]: pam_unix(sshd:session): session closed for user core Apr 20 19:18:55.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-12296-10.0.0.14:22-10.0.0.1:51712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:18:55.130995 systemd[1]: sshd@39-12296-10.0.0.14:22-10.0.0.1:51712.service: Deactivated successfully. Apr 20 19:18:55.223843 systemd[1]: session-41.scope: Deactivated successfully. Apr 20 19:18:55.272860 systemd-logind[1627]: Session 41 logged out. Waiting for processes to exit. Apr 20 19:18:55.274142 systemd-logind[1627]: Removed session 41. 
Apr 20 19:18:55.378000 audit: BPF prog-id=195 op=LOAD Apr 20 19:18:55.393000 audit: BPF prog-id=196 op=LOAD Apr 20 19:18:55.393000 audit[5420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130240 a2=98 a3=0 items=0 ppid=5396 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465313934343838313438333766323138396538313865653430386137 Apr 20 19:18:55.393000 audit: BPF prog-id=196 op=UNLOAD Apr 20 19:18:55.393000 audit[5420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465313934343838313438333766323138396538313865653430386137 Apr 20 19:18:55.393000 audit: BPF prog-id=197 op=LOAD Apr 20 19:18:55.393000 audit[5420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130490 a2=98 a3=0 items=0 ppid=5396 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.393000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465313934343838313438333766323138396538313865653430386137 Apr 20 19:18:55.393000 audit: BPF prog-id=198 op=LOAD Apr 20 19:18:55.393000 audit[5420]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130220 a2=98 a3=0 items=0 ppid=5396 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465313934343838313438333766323138396538313865653430386137 Apr 20 19:18:55.393000 audit: BPF prog-id=198 op=UNLOAD Apr 20 19:18:55.393000 audit[5420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465313934343838313438333766323138396538313865653430386137 Apr 20 19:18:55.415000 audit: BPF prog-id=197 op=UNLOAD Apr 20 19:18:55.415000 audit[5420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 
19:18:55.415000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465313934343838313438333766323138396538313865653430386137 Apr 20 19:18:55.423000 audit: BPF prog-id=199 op=LOAD Apr 20 19:18:55.423000 audit[5420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306f0 a2=98 a3=0 items=0 ppid=5396 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465313934343838313438333766323138396538313865653430386137 Apr 20 19:18:55.448466 systemd-resolved[1410]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 20 19:18:55.722000 audit: BPF prog-id=200 op=LOAD Apr 20 19:18:55.722000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd5e094e00 a2=94 a3=1 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.722000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.724000 audit: BPF prog-id=200 op=UNLOAD Apr 20 19:18:55.724000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd5e094e00 a2=94 a3=1 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.724000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.750000 audit: BPF prog-id=201 op=LOAD Apr 20 19:18:55.750000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd5e094df0 a2=94 a3=4 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.750000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.758000 audit: BPF prog-id=201 op=UNLOAD Apr 20 19:18:55.758000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd5e094df0 a2=0 a3=4 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.758000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.758000 audit: BPF prog-id=202 op=LOAD Apr 20 19:18:55.758000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd5e094c50 a2=94 a3=5 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.758000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.759000 audit: BPF prog-id=202 op=UNLOAD Apr 20 19:18:55.759000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd5e094c50 a2=0 a3=5 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.759000 audit: BPF prog-id=203 op=LOAD Apr 20 19:18:55.759000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd5e094e70 a2=94 a3=6 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.759000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.759000 audit: BPF prog-id=203 op=UNLOAD Apr 20 19:18:55.759000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd5e094e70 a2=0 a3=6 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.759000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.760000 audit: BPF prog-id=204 op=LOAD Apr 20 19:18:55.760000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd5e094620 a2=94 a3=88 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.760000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.760000 audit: BPF prog-id=205 op=LOAD Apr 20 19:18:55.760000 audit[5397]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd5e0944a0 a2=94 a3=2 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.760000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.760000 audit: BPF prog-id=205 op=UNLOAD Apr 20 19:18:55.760000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd5e0944d0 a2=0 a3=7ffd5e0945d0 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.760000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.761000 audit: BPF prog-id=204 op=UNLOAD Apr 20 19:18:55.761000 audit[5397]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=e970d10 a2=0 a3=504b60356dbb6548 items=0 ppid=4845 pid=5397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.761000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Apr 20 19:18:55.762865 containerd[1659]: time="2026-04-20T19:18:55.762818342Z" level=info msg="RunPodSandbox for name:\"calico-apiserver-84684997fc-zpm5v\" uid:\"dfb0b7d2-b28d-4433-9fba-0074dfdf81ee\" namespace:\"calico-system\" returns sandbox id \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\"" Apr 20 19:18:55.892000 audit: BPF prog-id=191 op=UNLOAD Apr 20 19:18:55.892000 audit[4845]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000c60c00 a2=0 a3=0 items=0 ppid=4822 pid=4845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:55.892000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Apr 20 19:18:58.406307 containerd[1659]: time="2026-04-20T19:18:58.404436276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:18:58.535467 containerd[1659]: time="2026-04-20T19:18:58.450039103Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=1, bytes read=14680805" Apr 20 19:18:58.589692 containerd[1659]: time="2026-04-20T19:18:58.588735792Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:18:58.752944 containerd[1659]: time="2026-04-20T19:18:58.752002571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 20 19:18:58.815616 containerd[1659]: time="2026-04-20T19:18:58.814392053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 24.059269503s" Apr 20 19:18:58.830458 containerd[1659]: time="2026-04-20T19:18:58.830007259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 20 19:18:58.977446 containerd[1659]: time="2026-04-20T19:18:58.976786132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 20 19:18:59.099771 containerd[1659]: time="2026-04-20T19:18:59.099108094Z" level=info msg="CreateContainer within sandbox \"a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850\" for container name:\"csi-node-driver-registrar\"" Apr 20 19:18:59.128000 audit[5486]: NETFILTER_CFG table=mangle:155 family=2 entries=18 op=nft_register_chain pid=5486 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Apr 20 19:18:59.128000 
audit[5486]: SYSCALL arch=c000003e syscall=46 success=yes exit=7984 a0=3 a1=7ffc3e28d130 a2=0 a3=7ffc3e28d11c items=0 ppid=4845 pid=5486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:59.128000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Apr 20 19:18:59.218300 kernel: kauditd_printk_skb: 122 callbacks suppressed Apr 20 19:18:59.218746 kernel: audit: type=1325 audit(1776712739.128:1006): table=mangle:155 family=2 entries=18 op=nft_register_chain pid=5486 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Apr 20 19:18:59.220953 kernel: audit: type=1300 audit(1776712739.128:1006): arch=c000003e syscall=46 success=yes exit=7984 a0=3 a1=7ffc3e28d130 a2=0 a3=7ffc3e28d11c items=0 ppid=4845 pid=5486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:59.222962 kernel: audit: type=1327 audit(1776712739.128:1006): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Apr 20 19:18:59.306871 containerd[1659]: time="2026-04-20T19:18:59.305163432Z" level=info msg="Container 14ed93037cc2e60e5e2c3a7165a10dc161fa627ce835db63b0b80b4e4ca7ba97: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:18:59.312000 audit[5491]: NETFILTER_CFG table=nat:156 family=2 entries=15 op=nft_register_chain pid=5491 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Apr 20 19:18:59.321721 kernel: audit: type=1325 audit(1776712739.312:1007): table=nat:156 family=2 entries=15 op=nft_register_chain pid=5491 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Apr 20 19:18:59.312000 audit[5491]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffb2674bc0 a2=0 a3=7fffb2674bac items=0 ppid=4845 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:59.312000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Apr 20 19:18:59.359000 audit[5487]: NETFILTER_CFG table=filter:157 family=2 entries=73 op=nft_register_chain pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Apr 20 19:18:59.359000 audit[5487]: SYSCALL arch=c000003e syscall=46 success=yes exit=38620 a0=3 a1=7ffda5a37fa0 a2=0 a3=5651ebc45000 items=0 ppid=4845 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:59.359000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Apr 20 19:18:59.489240 kernel: audit: type=1300 audit(1776712739.312:1007): arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fffb2674bc0 a2=0 a3=7fffb2674bac items=0 ppid=4845 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:59.435165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207363888.mount: Deactivated successfully. 
Apr 20 19:18:59.489812 kernel: audit: type=1327 audit(1776712739.312:1007): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Apr 20 19:18:59.489844 kernel: audit: type=1325 audit(1776712739.359:1008): table=filter:157 family=2 entries=73 op=nft_register_chain pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Apr 20 19:18:59.489866 kernel: audit: type=1300 audit(1776712739.359:1008): arch=c000003e syscall=46 success=yes exit=38620 a0=3 a1=7ffda5a37fa0 a2=0 a3=5651ebc45000 items=0 ppid=4845 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:59.489936 kernel: audit: type=1327 audit(1776712739.359:1008): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Apr 20 19:18:59.632000 audit[5488]: NETFILTER_CFG table=raw:158 family=2 entries=21 op=nft_register_chain pid=5488 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Apr 20 19:18:59.632000 audit[5488]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc24cb3270 a2=0 a3=7ffc24cb325c items=0 ppid=4845 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:18:59.632000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Apr 20 19:18:59.648822 kernel: audit: type=1325 audit(1776712739.632:1009): table=raw:158 family=2 entries=21 op=nft_register_chain pid=5488 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Apr 20 
19:18:59.650346 containerd[1659]: time="2026-04-20T19:18:59.649175188Z" level=info msg="CreateContainer within sandbox \"a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850\" for name:\"csi-node-driver-registrar\" returns container id \"14ed93037cc2e60e5e2c3a7165a10dc161fa627ce835db63b0b80b4e4ca7ba97\"" Apr 20 19:18:59.683697 containerd[1659]: time="2026-04-20T19:18:59.680624376Z" level=info msg="StartContainer for \"14ed93037cc2e60e5e2c3a7165a10dc161fa627ce835db63b0b80b4e4ca7ba97\"" Apr 20 19:18:59.709102 containerd[1659]: time="2026-04-20T19:18:59.709057088Z" level=info msg="connecting to shim 14ed93037cc2e60e5e2c3a7165a10dc161fa627ce835db63b0b80b4e4ca7ba97" address="unix:///run/containerd/s/0eb0305fea608ca907cd45f8e4caa2dc3006719b8287158ec4c075e9e2b37cfa" protocol=ttrpc version=3 Apr 20 19:19:00.156056 systemd[1]: Started cri-containerd-14ed93037cc2e60e5e2c3a7165a10dc161fa627ce835db63b0b80b4e4ca7ba97.scope - libcontainer container 14ed93037cc2e60e5e2c3a7165a10dc161fa627ce835db63b0b80b4e4ca7ba97. Apr 20 19:19:00.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-8211-10.0.0.14:22-10.0.0.1:42008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:19:00.248064 systemd[1]: Started sshd@40-8211-10.0.0.14:22-10.0.0.1:42008.service - OpenSSH per-connection server daemon (10.0.0.1:42008). 
Apr 20 19:19:01.408000 audit[5519]: AUDIT1101 pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:01.424000 audit: BPF prog-id=206 op=LOAD Apr 20 19:19:01.424000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001a0490 a2=98 a3=0 items=0 ppid=5035 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:01.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134656439333033376363326536306535653263336137313635613130 Apr 20 19:19:01.424000 audit: BPF prog-id=207 op=LOAD Apr 20 19:19:01.424000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0220 a2=98 a3=0 items=0 ppid=5035 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:01.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134656439333033376363326536306535653263336137313635613130 Apr 20 19:19:01.424000 audit: BPF prog-id=207 op=UNLOAD Apr 20 19:19:01.424000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5035 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:01.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134656439333033376363326536306535653263336137313635613130 Apr 20 19:19:01.424000 audit: BPF prog-id=206 op=UNLOAD Apr 20 19:19:01.424000 audit[5498]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=13 a1=0 a2=0 a3=0 items=0 ppid=5035 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:01.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134656439333033376363326536306535653263336137313635613130 Apr 20 19:19:01.424000 audit: BPF prog-id=208 op=LOAD Apr 20 19:19:01.424000 audit[5498]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001a06f0 a2=98 a3=0 items=0 ppid=5035 pid=5498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:01.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134656439333033376363326536306535653263336137313635613130 Apr 20 19:19:01.447000 audit[5519]: AUDIT1103 pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Apr 20 19:19:01.447000 audit[5519]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeae12d0c0 a2=3 a3=0 items=0 ppid=1 pid=5519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:01.447000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:19:02.010366 sshd[5519]: Accepted publickey for core from 10.0.0.1 port 42008 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:19:01.568335 sshd-session[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:19:02.024957 kubelet[3163]: E0420 19:19:01.972472 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.036s" Apr 20 19:19:02.425884 containerd[1659]: time="2026-04-20T19:19:02.424364702Z" level=error msg="get state for 14ed93037cc2e60e5e2c3a7165a10dc161fa627ce835db63b0b80b4e4ca7ba97" error="context deadline exceeded" Apr 20 19:19:02.425884 containerd[1659]: time="2026-04-20T19:19:02.424452986Z" level=warning msg="unknown status" status=0 Apr 20 19:19:02.441897 systemd-logind[1627]: New session '42' of user 'core' with class 'user' and type 'tty'. Apr 20 19:19:02.449496 systemd[1]: Started session-42.scope - Session 42 of User core. 
Apr 20 19:19:02.521000 audit[5519]: AUDIT1105 pid=5519 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:02.565000 audit[5527]: AUDIT1103 pid=5527 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:02.648707 containerd[1659]: time="2026-04-20T19:19:02.648504303Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 19:19:03.835036 kubelet[3163]: E0420 19:19:03.834592 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:19:04.421605 kubelet[3163]: E0420 19:19:04.420879 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.453s" Apr 20 19:19:04.476333 kubelet[3163]: E0420 19:19:04.462316 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:19:05.697755 sshd[5527]: Connection closed by 10.0.0.1 port 42008 Apr 20 19:19:05.737000 audit[5519]: AUDIT1106 pid=5519 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:05.737000 audit[5519]: AUDIT1104 pid=5519 uid=0 auid=500 ses=42 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:05.855069 kernel: kauditd_printk_skb: 25 callbacks suppressed Apr 20 19:19:05.707914 sshd-session[5519]: pam_unix(sshd:session): session closed for user core Apr 20 19:19:05.856797 kernel: audit: type=1106 audit(1776712745.737:1021): pid=5519 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:05.856820 kernel: audit: type=1104 audit(1776712745.737:1022): pid=5519 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:05.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-8211-10.0.0.14:22-10.0.0.1:42008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:19:05.896744 systemd[1]: sshd@40-8211-10.0.0.14:22-10.0.0.1:42008.service: Deactivated successfully. Apr 20 19:19:05.981197 kernel: audit: type=1131 audit(1776712745.896:1023): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-8211-10.0.0.14:22-10.0.0.1:42008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:19:06.048297 systemd[1]: session-42.scope: Deactivated successfully. Apr 20 19:19:06.056772 systemd[1]: session-42.scope: Consumed 1.773s CPU time, 15.8M memory peak. Apr 20 19:19:06.132843 systemd-logind[1627]: Session 42 logged out. 
Waiting for processes to exit. Apr 20 19:19:06.266152 systemd-logind[1627]: Removed session 42. Apr 20 19:19:06.304078 kubelet[3163]: E0420 19:19:06.288361 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.419s" Apr 20 19:19:07.114119 containerd[1659]: time="2026-04-20T19:19:07.113966541Z" level=info msg="StartContainer for \"14ed93037cc2e60e5e2c3a7165a10dc161fa627ce835db63b0b80b4e4ca7ba97\" returns successfully" Apr 20 19:19:08.142124 kubelet[3163]: E0420 19:19:08.140526 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.247s" Apr 20 19:19:09.015000 audit[5554]: NETFILTER_CFG table=filter:159 family=2 entries=44 op=nft_register_chain pid=5554 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Apr 20 19:19:09.015000 audit[5554]: SYSCALL arch=c000003e syscall=46 success=yes exit=25188 a0=3 a1=7ffd893e2ef0 a2=0 a3=7ffd893e2edc items=0 ppid=4845 pid=5554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:09.015000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Apr 20 19:19:09.082624 kernel: audit: type=1325 audit(1776712749.015:1024): table=filter:159 family=2 entries=44 op=nft_register_chain pid=5554 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Apr 20 19:19:09.082772 kernel: audit: type=1300 audit(1776712749.015:1024): arch=c000003e syscall=46 success=yes exit=25188 a0=3 a1=7ffd893e2ef0 a2=0 a3=7ffd893e2edc items=0 ppid=4845 pid=5554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Apr 20 19:19:09.082873 kernel: audit: type=1327 audit(1776712749.015:1024): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Apr 20 19:19:10.122088 kubelet[3163]: E0420 19:19:10.121861 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.259s" Apr 20 19:19:10.810390 kubelet[3163]: E0420 19:19:10.810027 3163 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock" failed. No retries permitted until 2026-04-20 19:19:11.308276714 +0000 UTC m=+649.402526417 (durationBeforeRetry 500ms). Error: RegisterPlugin error -- failed to get plugin info using RPC GetInfo at socket /var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock, err: rpc error: code = DeadlineExceeded desc = context deadline exceeded Apr 20 19:19:11.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-8212-10.0.0.14:22-10.0.0.1:41084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:19:11.491257 kernel: audit: type=1130 audit(1776712751.253:1025): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-8212-10.0.0.14:22-10.0.0.1:41084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:19:11.248225 systemd[1]: Started sshd@41-8212-10.0.0.14:22-10.0.0.1:41084.service - OpenSSH per-connection server daemon (10.0.0.1:41084). 
Apr 20 19:19:11.694213 kubelet[3163]: I0420 19:19:11.151494 3163 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5h6vg" podStartSLOduration=165.563923787 podStartE2EDuration="3m27.13846071s" podCreationTimestamp="2026-04-20 19:15:44 +0000 UTC" firstStartedPulling="2026-04-20 19:18:17.306300966 +0000 UTC m=+595.400550667" lastFinishedPulling="2026-04-20 19:18:58.880837894 +0000 UTC m=+636.975087590" observedRunningTime="2026-04-20 19:19:10.66613857 +0000 UTC m=+648.760388286" watchObservedRunningTime="2026-04-20 19:19:11.13846071 +0000 UTC m=+649.232710427" Apr 20 19:19:12.008313 containerd[1659]: time="2026-04-20T19:19:12.004887856Z" level=info msg="container event discarded" container=bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf type=CONTAINER_CREATED_EVENT Apr 20 19:19:13.273907 kubelet[3163]: E0420 19:19:13.273865 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.241s" Apr 20 19:19:13.432126 kubelet[3163]: I0420 19:19:13.432103 3163 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 20 19:19:13.475103 kubelet[3163]: I0420 19:19:13.475074 3163 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 20 19:19:13.610000 audit[5568]: AUDIT1101 pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:13.622098 sshd[5568]: Accepted publickey for core from 10.0.0.1 port 41084 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:19:13.622508 sshd-session[5568]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:19:13.634016 kernel: audit: type=1101 audit(1776712753.610:1026): pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:13.618000 audit[5568]: AUDIT1103 pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:13.648257 kernel: audit: type=1103 audit(1776712753.618:1027): pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:13.668005 kernel: audit: type=1006 audit(1776712753.619:1028): pid=5568 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1 Apr 20 19:19:13.619000 audit[5568]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb9911ee0 a2=3 a3=0 items=0 ppid=1 pid=5568 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:13.619000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:19:13.690593 kernel: audit: type=1300 audit(1776712753.619:1028): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffb9911ee0 a2=3 a3=0 items=0 ppid=1 pid=5568 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:13.690720 kernel: audit: type=1327 audit(1776712753.619:1028): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:19:13.722740 systemd-logind[1627]: New session '43' of user 'core' with class 'user' and type 'tty'. Apr 20 19:19:13.737222 systemd[1]: Started session-43.scope - Session 43 of User core. Apr 20 19:19:14.146000 audit[5568]: AUDIT1105 pid=5568 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:14.242363 kernel: audit: type=1105 audit(1776712754.146:1029): pid=5568 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:14.526000 audit[5573]: AUDIT1103 pid=5573 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:14.827715 kernel: audit: type=1103 audit(1776712754.526:1030): pid=5573 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:16.755273 kubelet[3163]: E0420 19:19:16.748265 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.571s" Apr 20 19:19:17.907739 containerd[1659]: time="2026-04-20T19:19:17.891136115Z" 
level=info msg="container event discarded" container=bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf type=CONTAINER_STARTED_EVENT Apr 20 19:19:18.106320 sshd[5573]: Connection closed by 10.0.0.1 port 41084 Apr 20 19:19:18.106698 sshd-session[5568]: pam_unix(sshd:session): session closed for user core Apr 20 19:19:18.176000 audit[5568]: AUDIT1106 pid=5568 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:18.256159 kernel: audit: type=1106 audit(1776712758.176:1031): pid=5568 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:18.247000 audit[5568]: AUDIT1104 pid=5568 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:18.361313 kernel: audit: type=1104 audit(1776712758.247:1032): pid=5568 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:18.442793 systemd[1]: sshd@41-8212-10.0.0.14:22-10.0.0.1:41084.service: Deactivated successfully. 
Apr 20 19:19:18.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-8212-10.0.0.14:22-10.0.0.1:41084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:19:18.502383 kernel: audit: type=1131 audit(1776712758.443:1033): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-8212-10.0.0.14:22-10.0.0.1:41084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:19:18.503615 kubelet[3163]: E0420 19:19:18.502358 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.473s" Apr 20 19:19:18.503879 systemd[1]: session-43.scope: Deactivated successfully. Apr 20 19:19:18.504364 systemd[1]: session-43.scope: Consumed 2.256s CPU time, 18M memory peak. Apr 20 19:19:18.507037 systemd-logind[1627]: Session 43 logged out. Waiting for processes to exit. Apr 20 19:19:18.520033 systemd-logind[1627]: Removed session 43. Apr 20 19:19:23.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-8213-10.0.0.14:22-10.0.0.1:58062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:19:23.381923 systemd[1]: Started sshd@42-8213-10.0.0.14:22-10.0.0.1:58062.service - OpenSSH per-connection server daemon (10.0.0.1:58062). Apr 20 19:19:23.409221 kernel: audit: type=1130 audit(1776712763.380:1034): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-8213-10.0.0.14:22-10.0.0.1:58062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:19:24.786000 audit[5619]: AUDIT1101 pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:24.822891 sshd[5619]: Accepted publickey for core from 10.0.0.1 port 58062 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:19:24.825352 kernel: audit: type=1101 audit(1776712764.786:1035): pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:24.824990 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:19:24.822000 audit[5619]: AUDIT1103 pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:24.848469 kernel: audit: type=1103 audit(1776712764.822:1036): pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:24.822000 audit[5619]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcffb135c0 a2=3 a3=0 items=0 ppid=1 pid=5619 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:24.822000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 
19:19:24.934255 kernel: audit: type=1006 audit(1776712764.822:1037): pid=5619 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1 Apr 20 19:19:24.935803 kernel: audit: type=1300 audit(1776712764.822:1037): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcffb135c0 a2=3 a3=0 items=0 ppid=1 pid=5619 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:24.936130 kernel: audit: type=1327 audit(1776712764.822:1037): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:19:25.038163 systemd-logind[1627]: New session '44' of user 'core' with class 'user' and type 'tty'. Apr 20 19:19:25.116299 systemd[1]: Started session-44.scope - Session 44 of User core. Apr 20 19:19:25.490000 audit[5619]: AUDIT1105 pid=5619 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:25.511165 kernel: audit: type=1105 audit(1776712765.490:1038): pid=5619 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:25.542000 audit[5628]: AUDIT1103 pid=5628 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:25.571598 kernel: audit: type=1103 
audit(1776712765.542:1039): pid=5628 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:28.460316 kubelet[3163]: E0420 19:19:28.418103 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.451s" Apr 20 19:19:28.768330 sshd[5628]: Connection closed by 10.0.0.1 port 58062 Apr 20 19:19:28.766299 sshd-session[5619]: pam_unix(sshd:session): session closed for user core Apr 20 19:19:28.866000 audit[5619]: AUDIT1106 pid=5619 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:28.866000 audit[5619]: AUDIT1104 pid=5619 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:29.022246 kernel: audit: type=1106 audit(1776712768.866:1040): pid=5619 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:29.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-8213-10.0.0.14:22-10.0.0.1:58062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:19:29.019945 systemd[1]: sshd@42-8213-10.0.0.14:22-10.0.0.1:58062.service: Deactivated successfully. Apr 20 19:19:29.060178 kernel: audit: type=1104 audit(1776712768.866:1041): pid=5619 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:29.060293 kernel: audit: type=1131 audit(1776712769.019:1042): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-8213-10.0.0.14:22-10.0.0.1:58062 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:19:29.083843 systemd[1]: session-44.scope: Deactivated successfully. Apr 20 19:19:29.094880 systemd[1]: session-44.scope: Consumed 1.890s CPU time, 15.9M memory peak. Apr 20 19:19:29.253201 systemd-logind[1627]: Session 44 logged out. Waiting for processes to exit. Apr 20 19:19:29.306713 systemd-logind[1627]: Removed session 44. Apr 20 19:19:35.566101 systemd[1]: Started sshd@43-8214-10.0.0.14:22-10.0.0.1:50030.service - OpenSSH per-connection server daemon (10.0.0.1:50030). Apr 20 19:19:35.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-8214-10.0.0.14:22-10.0.0.1:50030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:19:36.018936 kernel: audit: type=1130 audit(1776712775.683:1043): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-8214-10.0.0.14:22-10.0.0.1:50030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:19:38.975442 kubelet[3163]: E0420 19:19:38.930094 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.998s" Apr 20 19:19:41.091000 audit[5651]: AUDIT1101 pid=5651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:41.146794 kernel: audit: type=1101 audit(1776712781.091:1044): pid=5651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:41.152967 sshd[5651]: Accepted publickey for core from 10.0.0.1 port 50030 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:19:41.249000 audit[5651]: AUDIT1103 pid=5651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:41.305288 kernel: audit: type=1103 audit(1776712781.249:1045): pid=5651 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:41.309344 kubelet[3163]: E0420 19:19:41.263310 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.942s" Apr 20 19:19:41.303000 audit[5651]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecf439870 a2=3 a3=0 items=0 ppid=1 pid=5651 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:41.303000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:19:41.521225 kernel: audit: type=1006 audit(1776712781.303:1046): pid=5651 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=45 res=1 Apr 20 19:19:41.340202 sshd-session[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:19:41.682132 kernel: audit: type=1300 audit(1776712781.303:1046): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecf439870 a2=3 a3=0 items=0 ppid=1 pid=5651 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:19:41.695117 kernel: audit: type=1327 audit(1776712781.303:1046): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:19:42.140247 systemd-logind[1627]: New session '45' of user 'core' with class 'user' and type 'tty'. Apr 20 19:19:43.066995 systemd[1]: Started session-45.scope - Session 45 of User core. 
Apr 20 19:19:43.956114 kubelet[3163]: E0420 19:19:43.955915 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:19:43.986000 audit[5651]: AUDIT1105 pid=5651 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:44.022046 kernel: audit: type=1105 audit(1776712783.986:1047): pid=5651 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:44.036000 audit[5668]: AUDIT1103 pid=5668 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:19:44.082350 kernel: audit: type=1103 audit(1776712784.036:1048): pid=5668 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:09.840982 sshd[5668]: Connection closed by 10.0.0.1 port 50030 Apr 20 19:20:10.004199 sshd-session[5651]: pam_unix(sshd:session): session closed for user core Apr 20 19:20:10.339000 audit[5651]: AUDIT1106 pid=5651 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:10.364000 audit[5651]: AUDIT1104 pid=5651 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:10.588894 kernel: audit: type=1106 audit(1776712810.339:1049): pid=5651 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:10.591273 kernel: audit: type=1104 audit(1776712810.364:1050): pid=5651 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:10.670651 systemd[1]: sshd@43-8214-10.0.0.14:22-10.0.0.1:50030.service: Deactivated successfully. Apr 20 19:20:10.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-8214-10.0.0.14:22-10.0.0.1:50030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:20:10.672302 systemd[1]: sshd@43-8214-10.0.0.14:22-10.0.0.1:50030.service: Consumed 1.775s CPU time, 4.1M memory peak. Apr 20 19:20:10.861233 kernel: audit: type=1131 audit(1776712810.670:1051): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-8214-10.0.0.14:22-10.0.0.1:50030 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:20:11.219061 systemd[1]: session-45.scope: Deactivated successfully. Apr 20 19:20:11.472300 systemd[1]: session-45.scope: Consumed 11.939s CPU time, 17.9M memory peak. Apr 20 19:20:12.365641 systemd-logind[1627]: Session 45 logged out. Waiting for processes to exit. Apr 20 19:20:13.058093 systemd[1]: cri-containerd-d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf.scope: Deactivated successfully. Apr 20 19:20:13.188000 audit: BPF prog-id=83 op=UNLOAD Apr 20 19:20:13.195000 audit: BPF prog-id=106 op=UNLOAD Apr 20 19:20:13.770993 kernel: audit: type=1334 audit(1776712813.188:1052): prog-id=83 op=UNLOAD Apr 20 19:20:13.197455 systemd[1]: cri-containerd-d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf.scope: Consumed 1min 43.854s CPU time, 83.2M memory peak, 18.1M read from disk. Apr 20 19:20:13.902511 kernel: audit: type=1334 audit(1776712813.195:1053): prog-id=106 op=UNLOAD Apr 20 19:20:13.973836 systemd-logind[1627]: Removed session 45. Apr 20 19:20:17.553252 systemd[1]: Started sshd@44-12297-10.0.0.14:22-10.0.0.1:58342.service - OpenSSH per-connection server daemon (10.0.0.1:58342). Apr 20 19:20:17.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-12297-10.0.0.14:22-10.0.0.1:58342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:20:17.793051 kernel: audit: type=1130 audit(1776712817.716:1054): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-12297-10.0.0.14:22-10.0.0.1:58342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:20:17.908524 containerd[1659]: time="2026-04-20T19:20:17.878230873Z" level=info msg="received container exit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}" Apr 20 19:20:20.247485 systemd[1]: cri-containerd-ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729.scope: Deactivated successfully. Apr 20 19:20:20.251000 audit: BPF prog-id=78 op=UNLOAD Apr 20 19:20:20.296000 audit: BPF prog-id=107 op=UNLOAD Apr 20 19:20:20.720274 kernel: audit: type=1334 audit(1776712820.251:1055): prog-id=78 op=UNLOAD Apr 20 19:20:20.314453 systemd[1]: cri-containerd-ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729.scope: Consumed 57.490s CPU time, 33.4M memory peak, 10.1M read from disk. Apr 20 19:20:20.831191 kernel: audit: type=1334 audit(1776712820.296:1056): prog-id=107 op=UNLOAD Apr 20 19:20:23.044168 containerd[1659]: time="2026-04-20T19:20:23.037158007Z" level=info msg="received container exit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}" Apr 20 19:20:28.876396 containerd[1659]: time="2026-04-20T19:20:28.763080232Z" level=error msg="failed to handle container TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}" error="failed to stop container: context deadline exceeded" Apr 20 19:20:30.084491 containerd[1659]: time="2026-04-20T19:20:30.082016917Z" level=error msg="ttrpc: received message on inactive stream" stream=139 Apr 20 19:20:30.695859 containerd[1659]: 
time="2026-04-20T19:20:30.672296181Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}" Apr 20 19:20:30.740335 containerd[1659]: time="2026-04-20T19:20:30.734235567Z" level=error msg="ttrpc: received message on inactive stream" stream=135 Apr 20 19:20:33.825835 containerd[1659]: time="2026-04-20T19:20:33.824823670Z" level=error msg="failed to handle container TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}" error="failed to stop container: context deadline exceeded" Apr 20 19:20:35.170239 containerd[1659]: time="2026-04-20T19:20:35.169881845Z" level=error msg="ttrpc: received message on inactive stream" stream=139 Apr 20 19:20:35.373086 containerd[1659]: time="2026-04-20T19:20:35.371589526Z" level=error msg="ttrpc: received message on inactive stream" stream=135 Apr 20 19:20:36.103000 audit[5695]: NETFILTER_CFG table=filter:160 family=2 entries=20 op=nft_register_rule pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:20:36.103000 audit[5695]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe2aa6d920 a2=0 a3=7ffe2aa6d90c items=0 ppid=3270 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:20:36.103000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:20:36.538313 kernel: audit: type=1325 audit(1776712836.103:1057): table=filter:160 family=2 entries=20 op=nft_register_rule 
pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:20:36.566926 kernel: audit: type=1300 audit(1776712836.103:1057): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffe2aa6d920 a2=0 a3=7ffe2aa6d90c items=0 ppid=3270 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:20:36.651982 kernel: audit: type=1327 audit(1776712836.103:1057): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:20:37.482000 audit[5695]: NETFILTER_CFG table=nat:161 family=2 entries=110 op=nft_register_chain pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:20:37.482000 audit[5695]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffe2aa6d920 a2=0 a3=7ffe2aa6d90c items=0 ppid=3270 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:20:37.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:20:37.632146 kernel: audit: type=1325 audit(1776712837.482:1058): table=nat:161 family=2 entries=110 op=nft_register_chain pid=5695 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:20:37.633096 kernel: audit: type=1300 audit(1776712837.482:1058): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffe2aa6d920 a2=0 a3=7ffe2aa6d90c items=0 ppid=3270 pid=5695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:20:37.633122 kernel: audit: type=1327 
audit(1776712837.482:1058): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:20:38.817000 audit[5686]: AUDIT1101 pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:38.983953 kernel: audit: type=1101 audit(1776712838.817:1059): pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:39.001917 sshd[5686]: Accepted publickey for core from 10.0.0.1 port 58342 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:20:39.218000 audit[5686]: AUDIT1103 pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:39.218000 audit[5686]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe13719fa0 a2=3 a3=0 items=0 ppid=1 pid=5686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:20:39.218000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:20:39.366467 kernel: audit: type=1103 audit(1776712839.218:1060): pid=5686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 
20 19:20:39.366892 kernel: audit: type=1006 audit(1776712839.218:1061): pid=5686 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=46 res=1 Apr 20 19:20:39.366929 kernel: audit: type=1300 audit(1776712839.218:1061): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe13719fa0 a2=3 a3=0 items=0 ppid=1 pid=5686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:20:39.558874 sshd-session[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:20:41.026164 systemd-logind[1627]: New session '46' of user 'core' with class 'user' and type 'tty'. Apr 20 19:20:41.766155 kubelet[3163]: E0420 19:20:41.084987 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="59.736s" Apr 20 19:20:41.797335 systemd[1]: Started session-46.scope - Session 46 of User core. 
Apr 20 19:20:42.044000 audit: BPF prog-id=133 op=UNLOAD Apr 20 19:20:42.044000 audit: BPF prog-id=137 op=UNLOAD Apr 20 19:20:42.415209 containerd[1659]: time="2026-04-20T19:20:41.878862151Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:20:42.415209 containerd[1659]: time="2026-04-20T19:20:41.880005547Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}" Apr 20 19:20:42.779831 kernel: kauditd_printk_skb: 1 callbacks suppressed Apr 20 19:20:42.024483 systemd[1]: cri-containerd-bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf.scope: Deactivated successfully. Apr 20 19:20:42.817050 kernel: audit: type=1334 audit(1776712842.044:1062): prog-id=133 op=UNLOAD Apr 20 19:20:42.046397 systemd[1]: cri-containerd-bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf.scope: Consumed 1min 15.554s CPU time, 112.1M memory peak, 12.5M read from disk. 
Apr 20 19:20:42.823457 kernel: audit: type=1334 audit(1776712842.044:1063): prog-id=137 op=UNLOAD Apr 20 19:20:42.851114 containerd[1659]: time="2026-04-20T19:20:42.848897864Z" level=error msg="ttrpc: received message on inactive stream" stream=145 Apr 20 19:20:42.979924 containerd[1659]: time="2026-04-20T19:20:42.875305843Z" level=error msg="ttrpc: received message on inactive stream" stream=149 Apr 20 19:20:43.282292 containerd[1659]: time="2026-04-20T19:20:43.277281583Z" level=info msg="received container exit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}" Apr 20 19:20:43.481000 audit[5686]: AUDIT1105 pid=5686 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:43.835116 kernel: audit: type=1105 audit(1776712843.481:1064): pid=5686 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:43.986000 audit[5711]: AUDIT1103 pid=5711 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:44.113339 kernel: audit: type=1103 audit(1776712843.986:1065): pid=5711 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:20:45.263831 kubelet[3163]: E0420 19:20:45.263795 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:20:46.761028 kubelet[3163]: E0420 19:20:46.217140 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:20:51.248328 containerd[1659]: time="2026-04-20T19:20:50.684892196Z" level=info msg="container event discarded" container=1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f type=CONTAINER_CREATED_EVENT Apr 20 19:20:51.635492 containerd[1659]: time="2026-04-20T19:20:51.285101768Z" level=info msg="container event discarded" container=1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f type=CONTAINER_STARTED_EVENT Apr 20 19:20:51.950489 containerd[1659]: time="2026-04-20T19:20:51.947251358Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:20:52.051455 containerd[1659]: time="2026-04-20T19:20:52.050024306Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}" Apr 20 19:20:52.246291 containerd[1659]: time="2026-04-20T19:20:52.231167542Z" level=error msg="ttrpc: 
received message on inactive stream" stream=145 Apr 20 19:20:52.246291 containerd[1659]: time="2026-04-20T19:20:52.234770963Z" level=error msg="ttrpc: received message on inactive stream" stream=149 Apr 20 19:20:53.558346 containerd[1659]: time="2026-04-20T19:20:53.551096939Z" level=error msg="failed to handle container TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}" error="failed to stop container: context deadline exceeded" Apr 20 19:20:55.414509 containerd[1659]: time="2026-04-20T19:20:55.406869017Z" level=error msg="ttrpc: received message on inactive stream" stream=77 Apr 20 19:20:55.823079 containerd[1659]: time="2026-04-20T19:20:55.487001693Z" level=error msg="ttrpc: received message on inactive stream" stream=81 Apr 20 19:21:01.063177 kubelet[3163]: E0420 19:21:01.047595 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:21:02.123650 containerd[1659]: time="2026-04-20T19:21:02.123232054Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:21:03.163632 containerd[1659]: time="2026-04-20T19:21:02.834223851Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 
nanos:249587616}" Apr 20 19:21:03.533291 containerd[1659]: time="2026-04-20T19:21:03.474088859Z" level=info msg="container event discarded" container=5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55 type=CONTAINER_CREATED_EVENT Apr 20 19:21:04.367012 containerd[1659]: time="2026-04-20T19:21:04.362994655Z" level=error msg="ttrpc: received message on inactive stream" stream=155 Apr 20 19:21:04.858239 containerd[1659]: time="2026-04-20T19:21:04.569020139Z" level=error msg="ttrpc: received message on inactive stream" stream=159 Apr 20 19:21:05.785468 sshd[5711]: Connection closed by 10.0.0.1 port 58342 Apr 20 19:21:05.984150 containerd[1659]: time="2026-04-20T19:21:05.768902598Z" level=info msg="container event discarded" container=5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55 type=CONTAINER_STARTED_EVENT Apr 20 19:21:05.874470 sshd-session[5686]: pam_unix(sshd:session): session closed for user core Apr 20 19:21:06.059000 audit[5686]: AUDIT1106 pid=5686 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:21:06.087000 audit[5686]: AUDIT1104 pid=5686 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:21:06.628407 kernel: audit: type=1106 audit(1776712866.059:1066): pid=5686 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 
19:21:06.676209 systemd[1]: sshd@44-12297-10.0.0.14:22-10.0.0.1:58342.service: Deactivated successfully. Apr 20 19:21:06.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-12297-10.0.0.14:22-10.0.0.1:58342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:21:07.166141 kernel: audit: type=1104 audit(1776712866.087:1067): pid=5686 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:21:06.832348 systemd[1]: sshd@44-12297-10.0.0.14:22-10.0.0.1:58342.service: Consumed 6.336s CPU time, 4.3M memory peak. Apr 20 19:21:07.173303 kernel: audit: type=1131 audit(1776712866.807:1068): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-12297-10.0.0.14:22-10.0.0.1:58342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:21:07.551197 systemd[1]: session-46.scope: Deactivated successfully. Apr 20 19:21:07.757706 systemd[1]: session-46.scope: Consumed 12.806s CPU time, 16M memory peak. Apr 20 19:21:07.948664 containerd[1659]: time="2026-04-20T19:21:07.948100008Z" level=info msg="container event discarded" container=5e0c1918f38592c73fcb73bd95d17e0d6767d8a16147c688f68fc7ce7991db55 type=CONTAINER_STOPPED_EVENT Apr 20 19:21:08.037988 systemd-logind[1627]: Session 46 logged out. Waiting for processes to exit. Apr 20 19:21:08.696779 systemd-logind[1627]: Removed session 46. 
Apr 20 19:21:13.798856 containerd[1659]: time="2026-04-20T19:21:13.368448278Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:21:14.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-12298-10.0.0.14:22-10.0.0.1:46242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:21:14.130749 systemd[1]: Started sshd@45-12298-10.0.0.14:22-10.0.0.1:46242.service - OpenSSH per-connection server daemon (10.0.0.1:46242). Apr 20 19:21:14.827009 kernel: audit: type=1130 audit(1776712874.273:1069): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-12298-10.0.0.14:22-10.0.0.1:46242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:21:14.858478 containerd[1659]: time="2026-04-20T19:21:14.481271107Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}" Apr 20 19:21:15.966334 containerd[1659]: time="2026-04-20T19:21:15.966171422Z" level=error msg="ttrpc: received message on inactive stream" stream=155 Apr 20 19:21:16.363321 containerd[1659]: time="2026-04-20T19:21:16.008197708Z" level=error msg="ttrpc: received message on inactive stream" stream=159 Apr 20 19:21:25.941798 containerd[1659]: time="2026-04-20T19:21:25.878325265Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:21:25.941798 containerd[1659]: time="2026-04-20T19:21:25.940300469Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}" Apr 20 19:21:28.082246 containerd[1659]: time="2026-04-20T19:21:28.068222380Z" level=error msg="ttrpc: received message on inactive stream" stream=87 Apr 20 19:21:28.472010 containerd[1659]: time="2026-04-20T19:21:28.117055857Z" level=error msg="ttrpc: received message on inactive stream" stream=91 Apr 20 19:21:33.266000 audit[5755]: AUDIT1101 pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:21:33.368119 kernel: audit: type=1101 audit(1776712893.266:1070): pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:21:33.438439 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 46242 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:21:33.777000 audit[5755]: AUDIT1103 pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:21:33.900000 audit[5755]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc47359f0 a2=3 a3=0 items=0 ppid=1 pid=5755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:21:33.900000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:21:34.244034 kernel: audit: type=1103 audit(1776712893.777:1071): pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:21:34.266923 kernel: audit: type=1006 audit(1776712893.900:1072): pid=5755 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=47 res=1 Apr 20 19:21:34.322332 kernel: audit: type=1300 audit(1776712893.900:1072): arch=c000003e syscall=1 
success=yes exit=3 a0=8 a1=7fffc47359f0 a2=3 a3=0 items=0 ppid=1 pid=5755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:21:34.414352 kernel: audit: type=1327 audit(1776712893.900:1072): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:21:34.464344 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:21:37.303885 systemd-logind[1627]: New session '47' of user 'core' with class 'user' and type 'tty'. Apr 20 19:21:37.524237 containerd[1659]: time="2026-04-20T19:21:37.519253880Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:21:37.771595 containerd[1659]: time="2026-04-20T19:21:37.771037131Z" level=error msg="ttrpc: received message on inactive stream" stream=167 Apr 20 19:21:37.833910 systemd[1]: Started session-47.scope - Session 47 of User core. 
Apr 20 19:21:37.981029 containerd[1659]: time="2026-04-20T19:21:37.771472774Z" level=error msg="ttrpc: received message on inactive stream" stream=169 Apr 20 19:21:38.150090 containerd[1659]: time="2026-04-20T19:21:38.128443250Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}" Apr 20 19:21:38.742188 kubelet[3163]: E0420 19:21:38.483892 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="54.535s" Apr 20 19:21:39.937000 audit[5755]: AUDIT1105 pid=5755 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:21:40.337147 kernel: audit: type=1105 audit(1776712899.937:1073): pid=5755 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:21:41.187000 audit[5775]: AUDIT1103 pid=5775 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:21:41.285696 kernel: audit: type=1103 audit(1776712901.187:1074): pid=5775 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Apr 20 19:21:47.958049 containerd[1659]: time="2026-04-20T19:21:47.938263533Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:21:47.958049 containerd[1659]: time="2026-04-20T19:21:47.944071951Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}" Apr 20 19:21:49.277492 kubelet[3163]: E0420 19:21:47.611033 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:21:50.175054 containerd[1659]: time="2026-04-20T19:21:50.172427231Z" level=error msg="get state for 9cda8f369855ee0b6f686be8ed2fce70c156be1bab096f4e877537d597b03998" error="context deadline exceeded" Apr 20 19:21:50.563508 containerd[1659]: time="2026-04-20T19:21:50.173487413Z" level=warning msg="unknown status" status=0 Apr 20 19:21:51.159969 containerd[1659]: time="2026-04-20T19:21:50.280829821Z" level=error msg="ttrpc: received message on inactive stream" stream=165 Apr 20 19:21:52.055941 containerd[1659]: time="2026-04-20T19:21:51.679644672Z" level=error msg="ttrpc: received message on inactive stream" stream=169 Apr 20 19:21:52.684640 containerd[1659]: time="2026-04-20T19:21:52.061220564Z" level=error msg="ttrpc: received message on 
inactive stream" stream=185 Apr 20 19:21:53.774319 containerd[1659]: time="2026-04-20T19:21:53.773924001Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 20 19:21:55.833757 kubelet[3163]: E0420 19:21:54.821219 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 19:21:58.025855 containerd[1659]: time="2026-04-20T19:21:58.020826836Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:21:58.626359 containerd[1659]: time="2026-04-20T19:21:58.621134926Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}" Apr 20 19:22:00.865701 containerd[1659]: time="2026-04-20T19:22:00.841201952Z" level=error msg="ttrpc: received message on inactive stream" stream=179 Apr 20 19:22:01.022433 containerd[1659]: time="2026-04-20T19:22:01.010200282Z" level=error msg="ttrpc: received message on inactive stream" stream=181 Apr 20 19:22:09.550333 containerd[1659]: time="2026-04-20T19:22:07.916236650Z" level=info msg="container event discarded" 
container=de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380 type=CONTAINER_CREATED_EVENT Apr 20 19:22:09.973039 containerd[1659]: time="2026-04-20T19:22:09.460235306Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:22:10.111965 containerd[1659]: time="2026-04-20T19:22:10.019266864Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}" Apr 20 19:22:11.746073 containerd[1659]: time="2026-04-20T19:22:11.732277846Z" level=error msg="ttrpc: received message on inactive stream" stream=99 Apr 20 19:22:12.322223 containerd[1659]: time="2026-04-20T19:22:11.738058508Z" level=error msg="ttrpc: received message on inactive stream" stream=103 Apr 20 19:22:12.550116 containerd[1659]: time="2026-04-20T19:22:12.510900294Z" level=info msg="container event discarded" container=de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380 type=CONTAINER_STARTED_EVENT Apr 20 19:22:12.845701 containerd[1659]: time="2026-04-20T19:22:12.523291201Z" level=info msg="container event discarded" container=de0d9f22d1f757f858246be59dd8870879fb3814bf5601fd5aafefa1aed27380 type=CONTAINER_STOPPED_EVENT Apr 20 19:22:21.660254 sshd[5775]: Connection closed by 10.0.0.1 port 46242 Apr 20 19:22:21.742914 sshd-session[5755]: pam_unix(sshd:session): session closed for user core Apr 20 19:22:22.117000 audit[5755]: AUDIT1106 pid=5755 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:22:22.162000 audit[5755]: AUDIT1104 pid=5755 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:22:22.482889 kernel: audit: type=1106 audit(1776712942.117:1075): pid=5755 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:22:22.511885 kernel: audit: type=1104 audit(1776712942.162:1076): pid=5755 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:22:22.538791 kubelet[3163]: E0420 19:22:21.304355 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 20 19:22:22.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-12298-10.0.0.14:22-10.0.0.1:46242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:22:22.539333 systemd[1]: sshd@45-12298-10.0.0.14:22-10.0.0.1:46242.service: Deactivated successfully. 
Apr 20 19:22:22.862248 kernel: audit: type=1131 audit(1776712942.617:1077): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-12298-10.0.0.14:22-10.0.0.1:46242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:22:22.656081 systemd[1]: sshd@45-12298-10.0.0.14:22-10.0.0.1:46242.service: Consumed 6.522s CPU time, 4.3M memory peak.
Apr 20 19:22:23.250473 systemd[1]: session-47.scope: Deactivated successfully.
Apr 20 19:22:23.388060 systemd[1]: session-47.scope: Consumed 22.265s CPU time, 17M memory peak.
Apr 20 19:22:23.727251 containerd[1659]: time="2026-04-20T19:22:23.483011079Z" level=info msg="container event discarded" container=9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328 type=CONTAINER_CREATED_EVENT
Apr 20 19:22:24.079527 containerd[1659]: time="2026-04-20T19:22:22.157508869Z" level=error msg="post event" error="context deadline exceeded"
Apr 20 19:22:24.079527 containerd[1659]: time="2026-04-20T19:22:21.389189097Z" level=error msg="ttrpc: received message on inactive stream" stream=63
Apr 20 19:22:23.775912 systemd-logind[1627]: Session 47 logged out. Waiting for processes to exit.
Apr 20 19:22:24.803694 systemd-logind[1627]: Removed session 47.
Apr 20 19:22:25.470507 containerd[1659]: time="2026-04-20T19:22:25.468088908Z" level=error msg="get state for 4541a931ad8dcecb05315d64587cd8ba7190629062d02fc0133cc1309c2941e5" error="context deadline exceeded"
Apr 20 19:22:25.664065 containerd[1659]: time="2026-04-20T19:22:25.522670550Z" level=warning msg="unknown status" status=0
Apr 20 19:22:26.187587 containerd[1659]: time="2026-04-20T19:22:26.185643262Z" level=info msg="container event discarded" container=9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328 type=CONTAINER_STARTED_EVENT
Apr 20 19:22:29.240705 containerd[1659]: time="2026-04-20T19:22:29.233785242Z" level=error msg="ttrpc: received message on inactive stream" stream=177
Apr 20 19:22:31.035682 systemd[1]: Started sshd@46-8215-10.0.0.14:22-10.0.0.1:57226.service - OpenSSH per-connection server daemon (10.0.0.1:57226).
Apr 20 19:22:31.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-8215-10.0.0.14:22-10.0.0.1:57226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:22:32.830642 kernel: audit: type=1130 audit(1776712951.213:1078): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-8215-10.0.0.14:22-10.0.0.1:57226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:22:33.062308 containerd[1659]: time="2026-04-20T19:22:33.041318658Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:22:33.062308 containerd[1659]: time="2026-04-20T19:22:33.042059304Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}"
Apr 20 19:22:34.597267 containerd[1659]: time="2026-04-20T19:22:34.568856599Z" level=error msg="ttrpc: received message on inactive stream" stream=179
Apr 20 19:22:37.421117 containerd[1659]: time="2026-04-20T19:22:37.362114520Z" level=info msg="container event discarded" container=9ded083b6efa3e8bd38dd84260b7356256b1201ad704d9a7d0fc68e43d407328 type=CONTAINER_STOPPED_EVENT
Apr 20 19:22:38.883099 containerd[1659]: time="2026-04-20T19:22:38.866246639Z" level=info msg="container event discarded" container=7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9 type=CONTAINER_CREATED_EVENT
Apr 20 19:22:39.679405 kubelet[3163]: E0420 19:22:38.845902 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:22:40.169512 containerd[1659]: time="2026-04-20T19:22:40.151249510Z" level=info msg="container event discarded" container=7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9 type=CONTAINER_STARTED_EVENT
Apr 20 19:22:43.761657 containerd[1659]: time="2026-04-20T19:22:43.576279887Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:22:44.469507 containerd[1659]: time="2026-04-20T19:22:44.436176772Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}"
Apr 20 19:22:45.074425 containerd[1659]: time="2026-04-20T19:22:45.069838658Z" level=error msg="ttrpc: received message on inactive stream" stream=189
Apr 20 19:22:45.074425 containerd[1659]: time="2026-04-20T19:22:45.069973776Z" level=error msg="ttrpc: received message on inactive stream" stream=191
Apr 20 19:22:53.706454 kubelet[3163]: E0420 19:22:53.696679 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 19:22:58.878255 containerd[1659]: time="2026-04-20T19:22:58.682191842Z" level=error msg="ttrpc: received message on inactive stream" stream=109
Apr 20 19:23:00.422655 containerd[1659]: time="2026-04-20T19:22:59.959961467Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:23:00.991186 containerd[1659]: time="2026-04-20T19:23:00.448745758Z" level=error msg="ttrpc: received message on inactive stream" stream=111
Apr 20 19:23:01.431875 containerd[1659]: time="2026-04-20T19:23:01.246685769Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}"
Apr 20 19:23:09.966000 audit[5816]: AUDIT1101 pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:23:10.364707 kernel: audit: type=1101 audit(1776712989.966:1079): pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:23:10.420132 sshd[5816]: Accepted publickey for core from 10.0.0.1 port 57226 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:23:11.385000 audit[5816]: AUDIT1103 pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:23:11.554000 audit[5816]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe70003520 a2=3 a3=0 items=0 ppid=1 pid=5816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:23:11.554000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:23:12.378288 kernel: audit: type=1103 audit(1776712991.385:1080): pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:23:12.518329 containerd[1659]: time="2026-04-20T19:23:11.850243698Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:23:12.518329 containerd[1659]: time="2026-04-20T19:23:11.851117541Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}"
Apr 20 19:23:11.908365 sshd-session[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:23:13.879362 kernel: audit: type=1006 audit(1776712991.554:1081): pid=5816 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=48 res=1
Apr 20 19:23:13.926305 kernel: audit: type=1300 audit(1776712991.554:1081): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe70003520 a2=3 a3=0 items=0 ppid=1 pid=5816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:23:13.948361 kernel: audit: type=1327 audit(1776712991.554:1081): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:23:15.266020 containerd[1659]: time="2026-04-20T19:23:14.855138296Z" level=error msg="ttrpc: received message on inactive stream" stream=185
Apr 20 19:23:16.113521 containerd[1659]: time="2026-04-20T19:23:15.947279997Z" level=error msg="ttrpc: received message on inactive stream" stream=189
Apr 20 19:23:17.848427 systemd-logind[1627]: New session '48' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:23:19.837216 systemd[1]: Started session-48.scope - Session 48 of User core.
Apr 20 19:23:20.971003 kubelet[3163]: E0420 19:23:15.169367 3163 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 19:23:21.618639 containerd[1659]: time="2026-04-20T19:23:21.617253238Z" level=info msg="container event discarded" container=a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850 type=CONTAINER_CREATED_EVENT
Apr 20 19:23:22.189492 containerd[1659]: time="2026-04-20T19:23:21.639717291Z" level=info msg="container event discarded" container=a95a152fb136b1fbc85dca6a88ab5e5cb65de4b808808cc8910b0dc8f0933850 type=CONTAINER_STARTED_EVENT
Apr 20 19:23:22.670000 audit[5816]: AUDIT1105 pid=5816 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:23:23.115218 kernel: audit: type=1105 audit(1776713002.670:1082): pid=5816 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:23:23.668000 audit[5832]: AUDIT1103 pid=5832 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:23:23.837023 kernel: audit: type=1103 audit(1776713003.668:1083): pid=5832 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:23:24.671879 containerd[1659]: time="2026-04-20T19:23:23.658885100Z" level=error msg="ttrpc: received message on inactive stream" stream=115
Apr 20 19:23:25.815287 containerd[1659]: time="2026-04-20T19:23:24.558080306Z" level=error msg="get state for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="context deadline exceeded"
Apr 20 19:23:26.637772 containerd[1659]: time="2026-04-20T19:23:25.911182702Z" level=warning msg="unknown status" status=0
Apr 20 19:23:27.077284 containerd[1659]: time="2026-04-20T19:23:25.189490982Z" level=info msg="container event discarded" container=7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536 type=CONTAINER_CREATED_EVENT
Apr 20 19:23:31.769035 containerd[1659]: time="2026-04-20T19:23:31.730249167Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:23:31.769035 containerd[1659]: time="2026-04-20T19:23:31.738321268Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}"
Apr 20 19:23:33.481273 containerd[1659]: time="2026-04-20T19:23:33.356237490Z" level=error msg="ttrpc: received message on inactive stream" stream=117
Apr 20 19:23:34.589525 containerd[1659]: time="2026-04-20T19:23:34.223242832Z" level=info msg="container event discarded" container=7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536 type=CONTAINER_STARTED_EVENT
Apr 20 19:23:34.852481 containerd[1659]: time="2026-04-20T19:23:34.819344427Z" level=error msg="ttrpc: received message on inactive stream" stream=119
Apr 20 19:23:36.697989 kubelet[3163]: E0420 19:23:36.688936 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Apr 20 19:23:43.013364 containerd[1659]: time="2026-04-20T19:23:42.985271064Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:23:44.082497 containerd[1659]: time="2026-04-20T19:23:44.074970009Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}"
Apr 20 19:23:44.321085 containerd[1659]: time="2026-04-20T19:23:44.086577052Z" level=error msg="ttrpc: received message on inactive stream" stream=197
Apr 20 19:23:44.321085 containerd[1659]: time="2026-04-20T19:23:44.086912478Z" level=error msg="ttrpc: received message on inactive stream" stream=201
Apr 20 19:23:50.829130 kubelet[3163]: I0420 19:23:50.803078 3163 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Apr 20 19:23:55.472195 containerd[1659]: time="2026-04-20T19:23:55.367523497Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:23:56.000987 containerd[1659]: time="2026-04-20T19:23:55.651146105Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}"
Apr 20 19:23:56.544122 containerd[1659]: time="2026-04-20T19:23:55.888521883Z" level=error msg="ttrpc: received message on inactive stream" stream=195
Apr 20 19:23:57.456995 containerd[1659]: time="2026-04-20T19:23:55.958346335Z" level=info msg="container event discarded" container=de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571 type=CONTAINER_CREATED_EVENT
Apr 20 19:23:57.456995 containerd[1659]: time="2026-04-20T19:23:57.185021455Z" level=error msg="ttrpc: received message on inactive stream" stream=199
Apr 20 19:23:58.087528 containerd[1659]: time="2026-04-20T19:23:57.757064891Z" level=info msg="container event discarded" container=de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571 type=CONTAINER_STARTED_EVENT
Apr 20 19:23:59.236940 containerd[1659]: time="2026-04-20T19:23:59.236480077Z" level=error msg="get state for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="context deadline exceeded"
Apr 20 19:24:00.242401 containerd[1659]: time="2026-04-20T19:24:00.231612125Z" level=warning msg="unknown status" status=0
Apr 20 19:24:01.499974 containerd[1659]: time="2026-04-20T19:24:00.427357999Z" level=info msg="container event discarded" container=14ed93037cc2e60e5e2c3a7165a10dc161fa627ce835db63b0b80b4e4ca7ba97 type=CONTAINER_CREATED_EVENT
Apr 20 19:24:02.545153 containerd[1659]: time="2026-04-20T19:24:01.562813896Z" level=error msg="ttrpc: received message on inactive stream" stream=121
Apr 20 19:24:05.072317 containerd[1659]: time="2026-04-20T19:24:04.955858830Z" level=error msg="ttrpc: received message on inactive stream" stream=123
Apr 20 19:24:06.063263 containerd[1659]: time="2026-04-20T19:24:05.180054594Z" level=error msg="get state for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="context deadline exceeded"
Apr 20 19:24:06.063263 containerd[1659]: time="2026-04-20T19:24:05.637169300Z" level=warning msg="unknown status" status=0
Apr 20 19:24:07.121294 containerd[1659]: time="2026-04-20T19:24:07.054349457Z" level=info msg="container event discarded" container=14ed93037cc2e60e5e2c3a7165a10dc161fa627ce835db63b0b80b4e4ca7ba97 type=CONTAINER_STARTED_EVENT
Apr 20 19:24:09.630734 containerd[1659]: time="2026-04-20T19:24:09.609525363Z" level=error msg="get state for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="context deadline exceeded"
Apr 20 19:24:09.630734 containerd[1659]: time="2026-04-20T19:24:09.615375420Z" level=warning msg="unknown status" status=0
Apr 20 19:24:10.349331 containerd[1659]: time="2026-04-20T19:24:10.343681537Z" level=error msg="failed to delete task" error="context deadline exceeded" id=bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf
Apr 20 19:24:10.800305 containerd[1659]: time="2026-04-20T19:24:10.784013134Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 19:24:11.876190 sshd[5832]: Connection closed by 10.0.0.1 port 57226
Apr 20 19:24:12.167170 sshd-session[5816]: pam_unix(sshd:session): session closed for user core
Apr 20 19:24:13.360000 audit[5816]: AUDIT1106 pid=5816 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:24:13.574000 audit[5816]: AUDIT1104 pid=5816 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:24:14.638184 containerd[1659]: time="2026-04-20T19:24:12.528994241Z" level=error msg="ttrpc: received message on inactive stream" stream=125
Apr 20 19:24:15.145364 kernel: audit: type=1106 audit(1776713053.360:1084): pid=5816 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:24:15.209165 kernel: audit: type=1104 audit(1776713053.574:1085): pid=5816 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:24:15.204414 systemd[1]: sshd@46-8215-10.0.0.14:22-10.0.0.1:57226.service: Deactivated successfully.
Apr 20 19:24:15.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-8215-10.0.0.14:22-10.0.0.1:57226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:16.336126 kernel: audit: type=1131 audit(1776713055.466:1086): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-8215-10.0.0.14:22-10.0.0.1:57226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:15.548409 systemd[1]: sshd@46-8215-10.0.0.14:22-10.0.0.1:57226.service: Consumed 10.594s CPU time, 4.2M memory peak.
Apr 20 19:24:17.071296 systemd[1]: session-48.scope: Deactivated successfully.
Apr 20 19:24:17.629999 systemd[1]: session-48.scope: Consumed 26.264s CPU time, 16.3M memory peak.
Apr 20 19:24:18.837402 containerd[1659]: time="2026-04-20T19:24:18.835255254Z" level=error msg="ttrpc: received message on inactive stream" stream=127
Apr 20 19:24:19.381514 systemd-logind[1627]: Session 48 logged out. Waiting for processes to exit.
Apr 20 19:24:21.973465 containerd[1659]: time="2026-04-20T19:24:18.864154017Z" level=error msg="ttrpc: received message on inactive stream" stream=203
Apr 20 19:24:22.837024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf-rootfs.mount: Deactivated successfully.
Apr 20 19:24:32.633278 kubelet[3163]: E0420 19:24:24.926138 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:24:33.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-7-10.0.0.14:22-10.0.0.1:42614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:33.987320 containerd[1659]: time="2026-04-20T19:24:33.633941120Z" level=error msg="ttrpc: received message on inactive stream" stream=201
Apr 20 19:24:34.219904 kernel: audit: type=1130 audit(1776713073.138:1087): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-7-10.0.0.14:22-10.0.0.1:42614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:24:32.886396 systemd[1]: Started sshd@47-7-10.0.0.14:22-10.0.0.1:42614.service - OpenSSH per-connection server daemon (10.0.0.1:42614).
Apr 20 19:24:34.399867 systemd-logind[1627]: Removed session 48.
Apr 20 19:24:36.468077 kubelet[3163]: E0420 19:24:36.441081 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:24:38.945228 containerd[1659]: time="2026-04-20T19:24:38.434301593Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"455e22e52dbec4ef0138dd20f28fc65b30eec409f31ad9046ebdfdbb3000d93f\": context canceled"
Apr 20 19:24:43.353995 containerd[1659]: time="2026-04-20T19:24:43.333324967Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}"
Apr 20 19:24:50.455353 containerd[1659]: time="2026-04-20T19:24:49.587135802Z" level=error msg="get state for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="context deadline exceeded"
Apr 20 19:24:52.012321 containerd[1659]: time="2026-04-20T19:24:50.358384103Z" level=error msg="ttrpc: received message on inactive stream" stream=131
Apr 20 19:24:53.159270 containerd[1659]: time="2026-04-20T19:24:52.067828712Z" level=warning msg="unknown status" status=0
Apr 20 19:24:55.327971 containerd[1659]: time="2026-04-20T19:24:55.322415698Z" level=error msg="get state for 535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04" error="context deadline exceeded"
Apr 20 19:24:55.729527 containerd[1659]: time="2026-04-20T19:24:55.329157465Z" level=warning msg="unknown status" status=0
Apr 20 19:24:56.393668 containerd[1659]: time="2026-04-20T19:24:56.368358955Z" level=error msg="ttrpc: received message on inactive stream" stream=47
Apr 20 19:24:56.946176 containerd[1659]: time="2026-04-20T19:24:56.926044272Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:24:57.153355 containerd[1659]: time="2026-04-20T19:24:57.075133949Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}"
Apr 20 19:24:57.937263 containerd[1659]: time="2026-04-20T19:24:57.935949209Z" level=error msg="ttrpc: received message on inactive stream" stream=135
Apr 20 19:24:58.210138 containerd[1659]: time="2026-04-20T19:24:58.204636839Z" level=error msg="ttrpc: received message on inactive stream" stream=133
Apr 20 19:25:07.262704 containerd[1659]: time="2026-04-20T19:25:07.253218346Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:25:07.736096 containerd[1659]: time="2026-04-20T19:25:07.642329796Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}"
Apr 20 19:25:08.702107 containerd[1659]: time="2026-04-20T19:25:08.569308543Z" level=error msg="ttrpc: received message on inactive stream" stream=211
Apr 20 19:25:08.702107 containerd[1659]: time="2026-04-20T19:25:08.664270161Z" level=error msg="ttrpc: received message on inactive stream" stream=207
Apr 20 19:25:17.351224 containerd[1659]: time="2026-04-20T19:25:17.350489020Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:25:17.953730 containerd[1659]: time="2026-04-20T19:25:17.381313068Z" level=error msg="ttrpc: received message on inactive stream" stream=207
Apr 20 19:25:18.112272 containerd[1659]: time="2026-04-20T19:25:17.977448200Z" level=error msg="ttrpc: received message on inactive stream" stream=205
Apr 20 19:25:19.561000 audit[5863]: AUDIT1101 pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:25:19.868180 kernel: audit: type=1101 audit(1776713119.561:1088): pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:25:20.268376 sshd[5863]: Accepted publickey for core from 10.0.0.1 port 42614 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:25:21.913000 audit[5863]: AUDIT1103 pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:25:22.193000 audit[5863]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea5e14a80 a2=3 a3=0 items=0 ppid=1 pid=5863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:25:22.193000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:25:26.231245 kernel: audit: type=1103 audit(1776713121.913:1089): pid=5863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:25:23.384340 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:25:26.629981 kubelet[3163]: E0420 19:25:20.659414 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 19:25:27.146452 kernel: audit: type=1006 audit(1776713122.193:1090): pid=5863 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=49 res=1
Apr 20 19:25:27.309311 kernel: audit: type=1300 audit(1776713122.193:1090): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea5e14a80 a2=3 a3=0 items=0 ppid=1 pid=5863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:25:27.356356 kernel: audit: type=1327 audit(1776713122.193:1090): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:25:28.317185 systemd-logind[1627]: New session '49' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:25:29.821354 systemd[1]: Started session-49.scope - Session 49 of User core.
Apr 20 19:25:34.591000 audit[5863]: AUDIT1105 pid=5863 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:25:35.147854 kernel: audit: type=1105 audit(1776713134.591:1091): pid=5863 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:25:36.177000 audit[5879]: AUDIT1103 pid=5879 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:25:36.817917 kernel: audit: type=1103 audit(1776713136.177:1092): pid=5879 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:25:57.122139 kubelet[3163]: E0420 19:25:52.848184 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:26:01.268482 containerd[1659]: time="2026-04-20T19:26:01.266593635Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}"
Apr 20 19:26:11.830241 containerd[1659]: time="2026-04-20T19:26:11.827790414Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:26:14.418350 containerd[1659]: time="2026-04-20T19:26:14.353234354Z" level=error msg="ttrpc: received message on inactive stream" stream=145
Apr 20 19:26:14.960242 kubelet[3163]: E0420 19:26:14.762241 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 19:26:15.144038 containerd[1659]: time="2026-04-20T19:26:14.874856119Z" level=error msg="ttrpc: received message on inactive stream" stream=141
Apr 20 19:26:45.930309 kubelet[3163]: E0420 19:26:45.893400 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:26:48.565870 sshd[5879]: Connection closed by 10.0.0.1 port 42614
Apr 20 19:26:48.741712 sshd-session[5863]: pam_unix(sshd:session): session closed for user core
Apr 20 19:26:49.560000 audit[5863]: AUDIT1106 pid=5863 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:26:49.804000 audit[5863]: AUDIT1104 pid=5863 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:26:50.159721 kernel: audit: type=1106 audit(1776713209.560:1093): pid=5863 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:26:50.179369 kernel: audit: type=1104 audit(1776713209.804:1094): pid=5863 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:26:50.352351 kubelet[3163]: E0420 19:26:48.632045 3163 controller.go:123] "Will retry updating lease" err="failed 5 attempts to update lease" interval="10s"
Apr 20 19:26:52.563659 systemd[1]: sshd@47-7-10.0.0.14:22-10.0.0.1:42614.service: Deactivated successfully.
Apr 20 19:26:52.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-7-10.0.0.14:22-10.0.0.1:42614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:26:52.913967 systemd[1]: sshd@47-7-10.0.0.14:22-10.0.0.1:42614.service: Consumed 11.985s CPU time, 4.3M memory peak.
Apr 20 19:26:53.125013 kernel: audit: type=1131 audit(1776713212.888:1095): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-7-10.0.0.14:22-10.0.0.1:42614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:26:54.958132 systemd[1]: session-49.scope: Deactivated successfully.
Apr 20 19:26:55.225244 systemd[1]: session-49.scope: Consumed 31.834s CPU time, 16.2M memory peak.
Apr 20 19:26:59.864086 systemd-logind[1627]: Session 49 logged out. Waiting for processes to exit.
Apr 20 19:27:01.147386 systemd[1]: Started sshd@48-8-10.0.0.14:22-10.0.0.1:47298.service - OpenSSH per-connection server daemon (10.0.0.1:47298).
Apr 20 19:27:01.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-8-10.0.0.14:22-10.0.0.1:47298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:27:02.687865 kernel: audit: type=1130 audit(1776713221.267:1096): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-8-10.0.0.14:22-10.0.0.1:47298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:27:02.527502 systemd-logind[1627]: Removed session 49.
Apr 20 19:27:15.736685 containerd[1659]: time="2026-04-20T19:27:15.683333637Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}"
Apr 20 19:27:18.752680 containerd[1659]: time="2026-04-20T19:27:18.752196602Z" level=error msg="get state for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="context deadline exceeded"
Apr 20 19:27:19.272253 containerd[1659]: time="2026-04-20T19:27:18.768355137Z" level=warning msg="unknown status" status=0
Apr 20 19:27:19.857115 containerd[1659]: time="2026-04-20T19:27:19.852730339Z" level=error msg="ttrpc: received message on inactive stream" stream=213
Apr 20 19:27:21.403423 kubelet[3163]: E0420 19:27:21.402438 3163 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 20 19:27:25.563475 containerd[1659]: time="2026-04-20T19:27:25.556083588Z" level=error msg="failed to delete task" error="context deadline exceeded" id=ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729
Apr 20 19:27:26.509188 containerd[1659]: time="2026-04-20T19:27:26.477429020Z" level=error msg="ttrpc: received message on inactive stream" stream=219
Apr 20 19:27:27.338425 containerd[1659]: time="2026-04-20T19:27:26.596074965Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 19:27:27.364744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729-rootfs.mount: Deactivated successfully.
Apr 20 19:27:28.738582 containerd[1659]: time="2026-04-20T19:27:28.351293603Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}"
Apr 20 19:27:29.488000 audit[5905]: AUDIT1101 pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:27:29.723996 kernel: audit: type=1101 audit(1776713249.488:1097): pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:27:29.759245 sshd[5905]: Accepted publickey for core from 10.0.0.1 port 47298 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:27:30.866000 audit[5905]: AUDIT1103 pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:27:31.087000 audit[5905]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc74c4b470 a2=3 a3=0 items=0 ppid=1 pid=5905 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:27:31.087000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:27:32.271180 kernel: audit: type=1103 audit(1776713250.866:1098): pid=5905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:27:31.580801 sshd-session[5905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:27:33.215866 containerd[1659]: time="2026-04-20T19:27:31.036562907Z" level=error msg="get state for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="context deadline exceeded"
Apr 20 19:27:33.215866 containerd[1659]: time="2026-04-20T19:27:31.635421509Z" level=error msg="ttrpc: received message on inactive stream" stream=209
Apr 20 19:27:34.084663 kernel: audit: type=1006 audit(1776713251.087:1099): pid=5905 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=50 res=1
Apr 20 19:27:34.184799 containerd[1659]: time="2026-04-20T19:27:33.380973164Z" level=warning msg="unknown status" status=0
Apr 20 19:27:34.207862 kernel: audit: type=1300 audit(1776713251.087:1099): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc74c4b470 a2=3 a3=0 items=0 ppid=1 pid=5905 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:27:34.221321 kernel: audit: type=1327 audit(1776713251.087:1099): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:27:38.552437 containerd[1659]: time="2026-04-20T19:27:38.550427379Z" level=error msg="failed to delete task" error="context deadline exceeded" id=d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf
Apr 20 19:27:39.661427 systemd-logind[1627]: New session '50' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:27:40.785235 containerd[1659]: time="2026-04-20T19:27:40.137853783Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 19:27:41.367010 systemd[1]: Started session-50.scope - Session 50 of User core.
Apr 20 19:27:48.670681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf-rootfs.mount: Deactivated successfully.
Apr 20 19:27:50.369000 audit[5905]: AUDIT1105 pid=5905 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:27:50.934771 kernel: audit: type=1105 audit(1776713270.369:1100): pid=5905 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:27:51.562000 audit[5937]: AUDIT1103 pid=5937 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:27:52.162426 kernel: audit: type=1103 audit(1776713271.562:1101): pid=5937 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:27:52.427690 containerd[1659]: time="2026-04-20T19:27:51.937988209Z" level=error msg="ttrpc: received message on inactive stream" stream=215
Apr 20 19:27:53.152524 kubelet[3163]: E0420 19:27:46.324724 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:25:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:25:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:25:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:25:14Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:27:58.319029 kubelet[3163]: E0420 19:27:58.314387 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 19:28:20.276840 containerd[1659]: time="2026-04-20T19:28:20.275062679Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}"
Apr 20 19:28:27.821038 containerd[1659]: time="2026-04-20T19:28:27.053801419Z" level=error msg="get state for 535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04" error="context deadline exceeded"
Apr 20 19:28:27.821038 containerd[1659]: time="2026-04-20T19:28:27.842069634Z" level=warning msg="unknown status" status=0
Apr 20 19:28:28.895308 containerd[1659]: time="2026-04-20T19:28:28.386085676Z" level=error msg="ttrpc: received message on inactive stream" stream=51
Apr 20 19:28:30.829230 containerd[1659]: time="2026-04-20T19:28:30.801160862Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:28:31.623853 containerd[1659]: time="2026-04-20T19:28:31.615463117Z" level=error msg="ttrpc: received message on inactive stream" stream=151
Apr 20 19:28:31.634800 containerd[1659]: time="2026-04-20T19:28:31.625132187Z" level=error msg="ttrpc: received message on inactive stream" stream=153
Apr 20 19:28:33.784722 kubelet[3163]: E0420 19:28:33.696183 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6m53.758s"
Apr 20 19:28:35.958329 kubelet[3163]: E0420 19:28:25.846508 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:28:38.965330 kubelet[3163]: E0420 19:28:32.365947 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 20 19:29:03.573862 kubelet[3163]: E0420 19:29:03.572782 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:29:05.536897 kubelet[3163]: E0420 19:29:05.521938 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:29:06.251227 containerd[1659]: time="2026-04-20T19:29:06.235198807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:29:07.580798 containerd[1659]: time="2026-04-20T19:29:07.446225692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48410453"
Apr 20 19:29:19.462491 containerd[1659]: time="2026-04-20T19:29:19.453022404Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:29:28.938590 containerd[1659]: time="2026-04-20T19:29:28.837231156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 20 19:29:31.801460 containerd[1659]: time="2026-04-20T19:29:31.772466873Z" level=info msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" with timeout 30 (s)"
Apr 20 19:29:33.982135 containerd[1659]: time="2026-04-20T19:29:33.956207054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 10m34.979231129s"
Apr 20 19:29:33.982135 containerd[1659]: time="2026-04-20T19:29:33.958284113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 20 19:29:33.982135 containerd[1659]: time="2026-04-20T19:29:33.976744904Z" level=info msg="Stop container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" with signal terminated"
Apr 20 19:29:51.673254 kubelet[3163]: E0420 19:29:50.290358 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 19:29:57.536382 sshd[5937]: Connection closed by 10.0.0.1 port 47298
Apr 20 19:29:57.768281 sshd-session[5905]: pam_unix(sshd:session): session closed for user core
Apr 20 19:29:58.545000 audit[5905]: AUDIT1106 pid=5905 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:29:58.854000 audit[5905]: AUDIT1104 pid=5905 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:29:59.211131 kernel: audit: type=1106 audit(1776713398.545:1102): pid=5905 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:29:59.326330 kernel: audit: type=1104 audit(1776713398.854:1103): pid=5905 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:30:00.365429 systemd[1]: sshd@48-8-10.0.0.14:22-10.0.0.1:47298.service: Deactivated successfully.
Apr 20 19:30:00.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-8-10.0.0.14:22-10.0.0.1:47298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:30:01.378298 containerd[1659]: time="2026-04-20T19:29:59.822437363Z" level=info msg="StopContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" with timeout 30 (s)"
Apr 20 19:30:02.108134 kernel: audit: type=1131 audit(1776713400.624:1104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-8-10.0.0.14:22-10.0.0.1:47298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:30:00.651070 systemd[1]: sshd@48-8-10.0.0.14:22-10.0.0.1:47298.service: Consumed 6.280s CPU time, 4.2M memory peak.
Apr 20 19:30:02.467936 systemd[1]: session-50.scope: Deactivated successfully.
Apr 20 19:30:02.844284 systemd[1]: session-50.scope: Consumed 52.493s CPU time, 17.6M memory peak.
Apr 20 19:30:04.242779 systemd-logind[1627]: Session 50 logged out. Waiting for processes to exit.
Apr 20 19:30:05.896925 containerd[1659]: time="2026-04-20T19:30:05.885799605Z" level=info msg="Stop container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" with signal terminated"
Apr 20 19:30:12.038916 systemd[1]: Started sshd@49-4105-10.0.0.14:22-10.0.0.1:56218.service - OpenSSH per-connection server daemon (10.0.0.1:56218).
Apr 20 19:30:12.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-4105-10.0.0.14:22-10.0.0.1:56218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:30:12.668156 kernel: audit: type=1130 audit(1776713412.251:1105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-4105-10.0.0.14:22-10.0.0.1:56218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:30:13.077271 systemd-logind[1627]: Removed session 50.
Apr 20 19:30:48.936455 kubelet[3163]: E0420 19:30:48.911181 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:30:57.787426 containerd[1659]: time="2026-04-20T19:30:57.733567465Z" level=error msg="ttrpc: received message on inactive stream" stream=223
Apr 20 19:31:00.864472 containerd[1659]: time="2026-04-20T19:31:00.052446354Z" level=error msg="ttrpc: received message on inactive stream" stream=221
Apr 20 19:31:05.234000 audit[5982]: AUDIT1101 pid=5982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:31:05.750285 kernel: audit: type=1101 audit(1776713465.234:1106): pid=5982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:31:05.865346 sshd[5982]: Accepted publickey for core from 10.0.0.1 port 56218 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:31:07.018000 audit[5982]: AUDIT1103 pid=5982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:31:07.053000 audit[5982]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc371bdcc0 a2=3 a3=0 items=0 ppid=1 pid=5982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:31:07.053000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:31:08.142426 kernel: audit: type=1103 audit(1776713467.018:1107): pid=5982 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:31:08.264104 containerd[1659]: time="2026-04-20T19:31:07.629943759Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"03ae0ba1d2d3417ff6e564506f2992695a3dbad51fe59896fa66eeee26c66dff\": context canceled"
Apr 20 19:31:07.849252 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:31:09.471148 kernel: audit: type=1006 audit(1776713467.053:1108): pid=5982 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=51 res=1
Apr 20 19:31:09.570883 kernel: audit: type=1300 audit(1776713467.053:1108): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc371bdcc0 a2=3 a3=0 items=0 ppid=1 pid=5982 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:31:09.622294 kernel: audit: type=1327 audit(1776713467.053:1108): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:31:12.195185 systemd-logind[1627]: New session '51' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:31:12.472147 systemd[1]: Started session-51.scope - Session 51 of User core.
Apr 20 19:31:17.752000 audit[5982]: AUDIT1105 pid=5982 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:31:17.961183 kernel: audit: type=1105 audit(1776713477.752:1109): pid=5982 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:31:19.967000 audit[6014]: AUDIT1103 pid=6014 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:31:20.306323 kernel: audit: type=1103 audit(1776713479.967:1110): pid=6014 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:31:24.937449 containerd[1659]: time="2026-04-20T19:31:24.622872797Z" level=error msg="get state for 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" error="context deadline exceeded"
Apr 20 19:31:25.722456 containerd[1659]: time="2026-04-20T19:31:25.043292682Z" level=warning msg="unknown status" status=0
Apr 20 19:31:25.722456 containerd[1659]: time="2026-04-20T19:31:25.540880558Z" level=error msg="ttrpc: received message on inactive stream" stream=233
Apr 20 19:31:35.774250 containerd[1659]: time="2026-04-20T19:31:35.724245562Z" level=info msg="Kill container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\""
Apr 20 19:31:44.620997 containerd[1659]: time="2026-04-20T19:31:44.619107713Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}"
Apr 20 19:31:48.537382 containerd[1659]: time="2026-04-20T19:31:48.325479728Z" level=error msg="get state for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="context deadline exceeded"
Apr 20 19:31:49.355044 containerd[1659]: time="2026-04-20T19:31:48.644889349Z" level=warning msg="unknown status" status=0
Apr 20 19:31:52.883190 containerd[1659]: time="2026-04-20T19:31:52.677169702Z" level=error msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" failed" error="rpc error: code = Canceled desc = context canceled"
Apr 20 19:31:53.714408 containerd[1659]: time="2026-04-20T19:31:52.947392028Z" level=error msg="get state for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="context deadline exceeded"
Apr 20 19:31:53.714408 containerd[1659]: time="2026-04-20T19:31:52.971798119Z" level=warning msg="unknown status" status=0
Apr 20 19:31:55.290137 containerd[1659]: time="2026-04-20T19:31:55.276404831Z" level=error msg="get state for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="context deadline exceeded"
Apr 20 19:31:55.963057 containerd[1659]: time="2026-04-20T19:31:55.318142050Z" level=warning msg="unknown status" status=0
Apr 20 19:31:56.672526 containerd[1659]: time="2026-04-20T19:31:56.670015811Z" level=error msg="failed to delete task" error="context deadline exceeded" id=ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729
Apr 20 19:31:57.249984 containerd[1659]: time="2026-04-20T19:31:56.682913927Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 19:31:57.249984 containerd[1659]: time="2026-04-20T19:31:56.683384767Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}"
Apr 20 19:32:00.270183 containerd[1659]: time="2026-04-20T19:31:59.151281085Z" level=error msg="get state for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="context deadline exceeded"
Apr 20 19:32:01.542075 containerd[1659]: time="2026-04-20T19:32:01.162193038Z" level=warning msg="unknown status" status=0
Apr 20 19:32:02.512929 containerd[1659]: time="2026-04-20T19:32:00.089814343Z" level=error msg="ttrpc: received message on inactive stream" stream=221
Apr 20 19:32:08.139963 kubelet[3163]: E0420 19:31:56.089480 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf"
Apr 20 19:32:09.419331 containerd[1659]: time="2026-04-20T19:32:08.764405266Z" level=error msg="get state for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="context deadline exceeded"
Apr 20 19:32:09.910185 containerd[1659]: time="2026-04-20T19:32:09.781736942Z" level=error msg="ttrpc: received message on inactive stream" stream=225
Apr 20 19:32:09.987694 containerd[1659]: time="2026-04-20T19:32:09.782511719Z" level=warning msg="unknown status" status=0
Apr 20 19:32:10.659195 containerd[1659]: time="2026-04-20T19:32:10.656119531Z" level=error msg="failed to delete task" error="context deadline exceeded" id=d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf
Apr 20 19:32:11.645328 containerd[1659]: time="2026-04-20T19:32:11.121518746Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 19:32:16.568689 containerd[1659]: time="2026-04-20T19:32:16.449340942Z" level=error msg="ttrpc: received message on inactive stream" stream=227
Apr 20 19:32:21.621476 kubelet[3163]: E0420 19:32:19.155943 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" containerName="kube-controller-manager" containerID="containerd://d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" gracePeriod=30
Apr 20 19:32:23.881386 kubelet[3163]: E0420 19:32:21.790290 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729"
Apr 20 19:32:29.516522 containerd[1659]: time="2026-04-20T19:32:29.515725872Z" level=error msg="StopContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" failed" error="rpc error: code = Unknown desc = failed to kill container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\": context deadline exceeded"
Apr 20 19:32:30.420382 kubelet[3163]: E0420 19:32:27.934418 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" containerName="kube-scheduler" containerID="containerd://ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" gracePeriod=30
Apr 20 19:32:31.499668 kubelet[3163]: E0420 19:32:26.170118 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 20 19:32:32.246230 containerd[1659]: time="2026-04-20T19:32:30.586177376Z" level=error msg="failed to drain init process ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729 io" error="context deadline exceeded" runtime=io.containerd.runc.v2
Apr 20 19:32:32.246230 containerd[1659]: time="2026-04-20T19:32:31.315402653Z" level=error msg="ttrpc: received message on inactive stream" stream=225
Apr 20 19:32:32.731612 kubelet[3163]: E0420 19:32:24.765146 3163
kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-controller-manager" containerID={"Type":"containerd","ID":"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf"} pod="kube-system/kube-controller-manager-localhost" Apr 20 19:32:32.731612 kubelet[3163]: E0420 19:32:30.550484 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729"} pod="kube-system/kube-scheduler-localhost" Apr 20 19:32:33.552518 containerd[1659]: time="2026-04-20T19:32:33.273634942Z" level=error msg="ttrpc: received message on inactive stream" stream=227 Apr 20 19:32:33.552518 containerd[1659]: time="2026-04-20T19:32:33.281520875Z" level=error msg="ttrpc: received message on inactive stream" stream=229 Apr 20 19:32:33.552518 containerd[1659]: time="2026-04-20T19:32:33.284672358Z" level=error msg="ttrpc: received message on inactive stream" stream=231 Apr 20 19:32:37.749120 containerd[1659]: time="2026-04-20T19:32:37.102397757Z" level=error msg="ttrpc: received message on inactive stream" stream=233 Apr 20 19:32:45.770105 kubelet[3163]: E0420 19:32:32.128852 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-controller-manager\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 20 19:32:48.855331 containerd[1659]: time="2026-04-20T19:32:48.783973184Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 
exited_at:{seconds:1776712842 nanos:981099034}" Apr 20 19:32:50.680714 kubelet[3163]: E0420 19:32:49.034032 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" Apr 20 19:33:08.975115 containerd[1659]: time="2026-04-20T19:33:08.867714744Z" level=error msg="get state for 535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04" error="context deadline exceeded" Apr 20 19:33:10.776012 containerd[1659]: time="2026-04-20T19:33:08.876353631Z" level=warning msg="unknown status" status=0 Apr 20 19:33:10.776012 containerd[1659]: time="2026-04-20T19:33:09.430602269Z" level=error msg="ttrpc: received message on inactive stream" stream=53 Apr 20 19:33:19.617193 containerd[1659]: time="2026-04-20T19:33:19.571248987Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:33:22.946039 kubelet[3163]: E0420 19:33:22.879450 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:33:27.320189 sshd[6014]: Connection closed by 10.0.0.1 port 56218 Apr 20 19:33:27.683393 sshd-session[5982]: pam_unix(sshd:session): session closed for user core Apr 20 19:33:29.209255 kubelet[3163]: E0420 19:33:28.224144 3163 controller.go:195] "Failed 
to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 19:33:29.190000 audit[5982]: AUDIT1106 pid=5982 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:33:29.367000 audit[5982]: AUDIT1104 pid=5982 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:33:32.864456 kernel: audit: type=1106 audit(1776713609.190:1111): pid=5982 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:33:32.888312 containerd[1659]: time="2026-04-20T19:33:30.214199694Z" level=error msg="ttrpc: received message on inactive stream" stream=241 Apr 20 19:33:33.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-4105-10.0.0.14:22-10.0.0.1:56218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:33:32.848998 systemd[1]: sshd@49-4105-10.0.0.14:22-10.0.0.1:56218.service: Deactivated successfully. 
Apr 20 19:33:38.268071 kernel: audit: type=1104 audit(1776713609.367:1112): pid=5982 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:33:33.317072 systemd[1]: sshd@49-4105-10.0.0.14:22-10.0.0.1:56218.service: Consumed 12.472s CPU time, 4.4M memory peak. Apr 20 19:33:39.455335 kernel: audit: type=1131 audit(1776713613.235:1113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-4105-10.0.0.14:22-10.0.0.1:56218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:33:36.221883 systemd[1]: session-51.scope: Deactivated successfully. Apr 20 19:33:36.525182 systemd[1]: session-51.scope: Consumed 45.673s CPU time, 15.9M memory peak. Apr 20 19:33:42.591077 containerd[1659]: time="2026-04-20T19:33:42.549416922Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for container name:\"calico-apiserver\"" Apr 20 19:33:44.570927 containerd[1659]: time="2026-04-20T19:33:44.565703322Z" level=error msg="ttrpc: received message on inactive stream" stream=159 Apr 20 19:33:44.570927 containerd[1659]: time="2026-04-20T19:33:44.566705239Z" level=error msg="ttrpc: received message on inactive stream" stream=161 Apr 20 19:33:46.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-8216-10.0.0.14:22-10.0.0.1:45800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:33:47.237242 kernel: audit: type=1130 audit(1776713626.852:1114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-8216-10.0.0.14:22-10.0.0.1:45800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:33:45.328727 systemd-logind[1627]: Session 51 logged out. Waiting for processes to exit. Apr 20 19:33:46.779242 systemd[1]: Started sshd@50-8216-10.0.0.14:22-10.0.0.1:45800.service - OpenSSH per-connection server daemon (10.0.0.1:45800). Apr 20 19:33:49.580461 systemd-logind[1627]: Removed session 51. Apr 20 19:33:55.450738 containerd[1659]: time="2026-04-20T19:33:53.750859888Z" level=error msg="get state for 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" error="context deadline exceeded" Apr 20 19:33:56.332361 containerd[1659]: time="2026-04-20T19:33:55.928387058Z" level=warning msg="unknown status" status=0 Apr 20 19:33:58.785311 containerd[1659]: time="2026-04-20T19:33:57.792154588Z" level=error msg="ttrpc: received message on inactive stream" stream=243 Apr 20 19:34:03.380191 containerd[1659]: time="2026-04-20T19:34:01.624521781Z" level=error msg="ttrpc: received message on inactive stream" stream=245 Apr 20 19:34:08.819158 containerd[1659]: time="2026-04-20T19:34:08.668407487Z" level=error msg="get state for 73574da5edd4b1ff54c9a1eda448e8fdacfdde5b83bea239e064b575ec742df0" error="context deadline exceeded" Apr 20 19:34:09.286653 containerd[1659]: time="2026-04-20T19:34:08.965869239Z" level=warning msg="unknown status" status=0 Apr 20 19:34:15.369673 containerd[1659]: time="2026-04-20T19:34:15.065463861Z" level=error msg="ttrpc: received message on inactive stream" stream=253 Apr 20 19:34:17.642026 kubelet[3163]: E0420 19:34:17.616026 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:30:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:30:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:30:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:30:11Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\\\",\\\"ghcr.io/flatcar/calico/node:v3.31.4\\\"],\\\"sizeBytes\\\":159838426},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\\\",\\\"ghcr.io/flatcar/calico/cni:v3.31.4\\\"],\\\"sizeBytes\\\":72167716},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.31.4\\\"],\\\"sizeBytes\\\":49971841},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\\\",\\\"quay.io/tigera/operator:v1.40.7\\\"],\\\"sizeBytes\\\":40842151},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\\\",\\\"registry.k8s.io/kube-proxy:v1.33.11\\\"],\\\"sizeBytes\\\":32009730},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\\\",\\\"registry.k8s.io/kube-apiserver:v1.33.11\\\"],\\\"sizeBytes\\\":30190588},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\\\",\\\"registry.k8s.io/kube-controller-manager
:v1.33.11\\\"],\\\"sizeBytes\\\":27737794},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\\\",\\\"registry.k8s.io/etcd:3.5.24-0\\\"],\\\"sizeBytes\\\":23716032},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\\\",\\\"registry.k8s.io/kube-scheduler:v1.33.11\\\"],\\\"sizeBytes\\\":21856121},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\\\",\\\"registry.k8s.io/coredns/coredns:v1.12.0\\\"],\\\"sizeBytes\\\":20939036},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\\\"],\\\"sizeBytes\\\":16260314},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\\\",\\\"ghcr.io/flatcar/calico/csi:v3.31.4\\\"],\\\"sizeBytes\\\":10348547},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\\\"],\\\"sizeBytes\\\":6186255},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 19:34:21.777326 containerd[1659]: time="2026-04-20T19:34:21.319373985Z" level=error msg="ExecSync for 
\"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"73574da5edd4b1ff54c9a1eda448e8fdacfdde5b83bea239e064b575ec742df0\": context canceled" Apr 20 19:34:28.283249 containerd[1659]: time="2026-04-20T19:34:28.282396893Z" level=info msg="Container d60bfc28bae39dd4c39466e0fffee6553b16b69bc14ddc6752a782b3abc019c6: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:34:33.045806 containerd[1659]: time="2026-04-20T19:34:32.738508480Z" level=error msg="get state for de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" error="context deadline exceeded" Apr 20 19:34:33.549660 containerd[1659]: time="2026-04-20T19:34:33.152388727Z" level=warning msg="unknown status" status=0 Apr 20 19:34:33.641361 containerd[1659]: time="2026-04-20T19:34:33.632952711Z" level=error msg="ttrpc: received message on inactive stream" stream=17 Apr 20 19:34:40.075220 kubelet[3163]: E0420 19:34:36.274918 3163 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" Apr 20 19:34:43.641421 kubelet[3163]: E0420 19:34:41.202529 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 20 19:34:53.487000 audit[6064]: AUDIT1101 pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:34:53.957634 kernel: audit: type=1101 audit(1776713693.487:1115): pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:34:54.095799 sshd[6064]: Accepted publickey for core from 10.0.0.1 port 45800 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:34:54.841000 audit[6064]: AUDIT1103 pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:34:55.142000 audit[6064]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3a25ffe0 a2=3 a3=0 items=0 ppid=1 pid=6064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:34:55.142000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:34:55.861935 kernel: audit: type=1103 audit(1776713694.841:1116): pid=6064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:34:55.998981 kernel: audit: type=1006 audit(1776713695.142:1117): pid=6064 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=52 res=1 Apr 20 19:34:56.025085 kernel: audit: type=1300 audit(1776713695.142:1117): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3a25ffe0 a2=3 a3=0 items=0 ppid=1 pid=6064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=52 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:34:56.194662 kernel: audit: type=1327 audit(1776713695.142:1117): 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:34:56.310401 sshd-session[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:34:57.776673 containerd[1659]: time="2026-04-20T19:34:57.659450196Z" level=error msg="ttrpc: received message on inactive stream" stream=19 Apr 20 19:35:01.526239 containerd[1659]: time="2026-04-20T19:35:01.359452380Z" level=error msg="get state for de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" error="context deadline exceeded" Apr 20 19:35:02.276583 containerd[1659]: time="2026-04-20T19:35:01.969252321Z" level=warning msg="unknown status" status=0 Apr 20 19:35:04.339165 systemd-logind[1627]: New session '52' of user 'core' with class 'user' and type 'tty'. Apr 20 19:35:05.919512 systemd[1]: Started session-52.scope - Session 52 of User core. Apr 20 19:35:07.289505 containerd[1659]: time="2026-04-20T19:35:07.279065153Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for name:\"calico-apiserver\" returns container id \"d60bfc28bae39dd4c39466e0fffee6553b16b69bc14ddc6752a782b3abc019c6\"" Apr 20 19:35:09.664000 audit[6064]: AUDIT1105 pid=6064 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:35:09.890712 kernel: audit: type=1105 audit(1776713709.664:1118): pid=6064 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:35:12.415000 audit[6078]: AUDIT1103 
pid=6078 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:35:13.002245 kernel: audit: type=1103 audit(1776713712.415:1119): pid=6078 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:35:14.137619 kubelet[3163]: I0420 19:35:14.085480 3163 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 19:35:16.561408 kubelet[3163]: I0420 19:35:16.556026 3163 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 19:35:26.918149 kubelet[3163]: E0420 19:35:22.956450 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 19:35:27.675903 kubelet[3163]: I0420 19:35:19.113501 3163 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 19:35:27.966088 kubelet[3163]: I0420 19:35:17.327698 3163 reflector.go:556] "Warning: watch ended with error" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding" Apr 20 19:35:31.200094 kubelet[3163]: E0420 19:35:22.155493 3163 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.31.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kld4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84684997fc-zpm5v_calico-system(dfb0b7d2-b28d-4433-9fba-0074dfdf81ee): CreateContainerError: context deadline exceeded" logger="UnhandledError" Apr 20 19:35:40.988052 kubelet[3163]: E0420 19:35:40.976523 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 19:35:43.132375 kubelet[3163]: E0420 19:35:43.131220 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 19:35:46.860496 kubelet[3163]: E0420 19:35:42.212161 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:35:51.073829 kubelet[3163]: E0420 19:35:41.868009 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with CreateContainerError: \"context deadline exceeded\"" pod="calico-system/calico-apiserver-84684997fc-zpm5v" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" Apr 20 19:35:54.065118 containerd[1659]: time="2026-04-20T19:35:54.054589382Z" level=error msg="ttrpc: received message on inactive stream" stream=259 Apr 20 19:35:56.745310 kubelet[3163]: E0420 19:35:37.435677 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:35:57.695335 containerd[1659]: time="2026-04-20T19:35:56.883120856Z" level=error msg="ttrpc: received message on inactive stream" stream=261 Apr 20 19:35:59.111452 containerd[1659]: time="2026-04-20T19:35:58.689432314Z" level=error msg="get state for eb9d360022c43cd0d0841cddcbcf4ee6c5a5143f96085fd143ab1aa7833e360b" error="context deadline exceeded" Apr 20 19:36:00.550804 containerd[1659]: time="2026-04-20T19:35:59.130506979Z" level=warning msg="unknown status" status=0 Apr 20 19:36:02.185417 containerd[1659]: time="2026-04-20T19:36:02.145831019Z" level=error 
msg="ttrpc: received message on inactive stream" stream=267 Apr 20 19:36:06.456837 containerd[1659]: time="2026-04-20T19:36:06.439427586Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"eb9d360022c43cd0d0841cddcbcf4ee6c5a5143f96085fd143ab1aa7833e360b\": context canceled" Apr 20 19:36:09.952376 kubelet[3163]: E0420 19:36:08.315215 3163 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}" Apr 20 19:36:16.431844 kubelet[3163]: E0420 19:36:12.955706 3163 kuberuntime_container.go:540] "ListContainers failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:36:17.338378 kubelet[3163]: E0420 19:36:17.108273 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 19:36:20.427335 kubelet[3163]: E0420 19:36:18.672482 3163 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:36:21.055872 kubelet[3163]: I0420 19:36:17.114519 3163 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 20 19:36:29.246163 kubelet[3163]: E0420 19:36:25.250804 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:36:29.769370 kubelet[3163]: E0420 19:36:28.319890 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:36:38.432030 kubelet[3163]: E0420 19:36:37.846644 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:36:44.050311 kubelet[3163]: E0420 19:36:44.044416 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="200ms" Apr 20 19:36:54.606520 kubelet[3163]: E0420 19:36:54.600976 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:36:57.660372 containerd[1659]: time="2026-04-20T19:36:57.478825663Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}" Apr 20 19:36:59.743194 kubelet[3163]: E0420 19:36:59.662909 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="400ms" Apr 20 19:37:06.814943 kubelet[3163]: E0420 19:36:57.385850 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:37:08.269366 containerd[1659]: time="2026-04-20T19:37:07.988253727Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" 
id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:37:09.116082 containerd[1659]: time="2026-04-20T19:37:09.112886103Z" level=error msg="ttrpc: received message on inactive stream" stream=243 Apr 20 19:37:09.325502 containerd[1659]: time="2026-04-20T19:37:09.316916317Z" level=error msg="ttrpc: received message on inactive stream" stream=239 Apr 20 19:37:11.824296 containerd[1659]: time="2026-04-20T19:37:11.808298362Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}" Apr 20 19:37:16.142799 kubelet[3163]: E0420 19:37:16.137493 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:37:16.734428 kubelet[3163]: I0420 19:37:12.839414 3163 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-20T19:37:04Z","lastTransitionTime":"2026-04-20T19:37:04Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 4m2.75822892s ago; threshold is 3m0s]"} Apr 20 19:37:18.328273 kubelet[3163]: E0420 19:37:17.542409 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" interval="800ms" Apr 20 19:37:23.599074 containerd[1659]: time="2026-04-20T19:37:23.079110342Z" level=error msg="ttrpc: received message on inactive stream" stream=233 Apr 20 19:37:24.221707 containerd[1659]: time="2026-04-20T19:37:23.621897226Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:37:28.414158 kubelet[3163]: E0420 19:37:28.408934 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:37:31.931478 sshd[6078]: Connection closed by 10.0.0.1 port 45800 Apr 20 19:37:32.853000 audit[6064]: AUDIT1106 pid=6064 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:37:33.050000 audit[6064]: AUDIT1104 pid=6064 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:37:33.291821 kernel: audit: type=1106 audit(1776713852.853:1120): pid=6064 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:37:32.107255 sshd-session[6064]: pam_unix(sshd:session): session closed for user core Apr 20 19:37:34.515156 kernel: audit: type=1104 audit(1776713853.050:1121): pid=6064 uid=0 auid=500 ses=52 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:37:34.391963 systemd[1]: sshd@50-8216-10.0.0.14:22-10.0.0.1:45800.service: Deactivated successfully. Apr 20 19:37:34.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-8216-10.0.0.14:22-10.0.0.1:45800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:37:34.970779 kernel: audit: type=1131 audit(1776713854.650:1122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-8216-10.0.0.14:22-10.0.0.1:45800 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:37:34.756528 systemd[1]: sshd@50-8216-10.0.0.14:22-10.0.0.1:45800.service: Consumed 13.443s CPU time, 4.1M memory peak. Apr 20 19:37:36.937374 systemd[1]: session-52.scope: Deactivated successfully. Apr 20 19:37:36.943831 systemd[1]: session-52.scope: Consumed 1min 5.443s CPU time, 18M memory peak. Apr 20 19:37:38.268673 systemd-logind[1627]: Session 52 logged out. Waiting for processes to exit. 
Apr 20 19:37:47.793384 kubelet[3163]: E0420 19:37:47.633440 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:37:48.578470 systemd[1]: Started sshd@51-4106-10.0.0.14:22-10.0.0.1:51734.service - OpenSSH per-connection server daemon (10.0.0.1:51734). Apr 20 19:37:48.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-4106-10.0.0.14:22-10.0.0.1:51734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:37:50.071721 kernel: audit: type=1130 audit(1776713868.870:1123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-4106-10.0.0.14:22-10.0.0.1:51734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:37:50.484212 systemd-logind[1627]: Removed session 52. 
Apr 20 19:37:51.515211 kubelet[3163]: E0420 19:37:51.507111 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="1.6s" Apr 20 19:37:52.660434 kubelet[3163]: E0420 19:37:52.653971 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:37:59.346775 kubelet[3163]: E0420 19:37:50.521946 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:38:17.707225 kubelet[3163]: E0420 19:38:17.706593 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake 
timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:38:20.407418 containerd[1659]: time="2026-04-20T19:38:20.393350511Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}" Apr 20 19:38:27.786278 containerd[1659]: time="2026-04-20T19:38:26.560379925Z" level=error msg="ttrpc: received message on inactive stream" stream=281 Apr 20 19:38:33.181165 containerd[1659]: time="2026-04-20T19:38:32.809468960Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:38:35.533994 containerd[1659]: time="2026-04-20T19:38:33.980888258Z" level=error msg="ttrpc: received message on inactive stream" stream=171 Apr 20 19:38:35.533994 containerd[1659]: time="2026-04-20T19:38:34.444525547Z" level=error msg="ttrpc: received message on inactive stream" stream=167 Apr 20 19:38:39.773322 containerd[1659]: time="2026-04-20T19:38:39.354424232Z" level=error msg="ttrpc: received message on inactive stream" stream=283 Apr 20 19:38:42.566504 containerd[1659]: time="2026-04-20T19:38:42.563353027Z" level=error msg="get state for 97fc186bea7b0bfc3510763cfdc6ce98c2e3d91068b87039ba069a597ce8ce85" error="context deadline exceeded" Apr 20 19:38:42.566504 containerd[1659]: time="2026-04-20T19:38:42.566780722Z" level=warning msg="unknown status" status=0 Apr 20 19:38:48.714360 containerd[1659]: time="2026-04-20T19:38:48.681027511Z" level=error 
msg="ttrpc: received message on inactive stream" stream=287 Apr 20 19:38:49.578230 kubelet[3163]: E0420 19:38:34.384410 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Apr 20 19:38:51.284892 kubelet[3163]: E0420 19:38:43.875195 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:38:56.870046 containerd[1659]: time="2026-04-20T19:38:56.854261378Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"97fc186bea7b0bfc3510763cfdc6ce98c2e3d91068b87039ba069a597ce8ce85\": context canceled" Apr 20 19:39:04.731000 audit[6125]: AUDIT1101 pid=6125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:39:05.276395 kernel: audit: type=1101 audit(1776713944.731:1124): pid=6125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:39:06.073261 sshd[6125]: Accepted publickey for core from 10.0.0.1 port 51734 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:39:09.149000 audit[6125]: AUDIT1103 pid=6125 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:39:09.437000 audit[6125]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc17e88400 a2=3 a3=0 items=0 ppid=1 pid=6125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:39:09.437000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:39:10.616951 kernel: audit: type=1103 audit(1776713949.149:1125): pid=6125 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:39:10.742419 kernel: audit: type=1006 audit(1776713949.437:1126): pid=6125 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=53 res=1 Apr 20 19:39:10.758404 kernel: audit: type=1300 audit(1776713949.437:1126): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc17e88400 a2=3 a3=0 items=0 ppid=1 pid=6125 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=53 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:39:10.824904 kernel: audit: type=1327 audit(1776713949.437:1126): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:39:12.557504 sshd-session[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:39:28.519437 kubelet[3163]: E0420 19:39:28.511160 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": 
net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:39:35.733043 systemd-logind[1627]: New session '53' of user 'core' with class 'user' and type 'tty'. Apr 20 19:39:36.238493 systemd[1]: Started session-53.scope - Session 53 of User core. Apr 20 19:39:41.916000 audit[6125]: AUDIT1105 pid=6125 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:39:42.288322 kernel: audit: type=1105 audit(1776713981.916:1127): pid=6125 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:39:43.046704 kubelet[3163]: E0420 19:39:43.044785 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:39:44.172000 audit[6154]: AUDIT1103 pid=6154 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:39:44.342865 kernel: audit: type=1103 audit(1776713984.172:1128): pid=6154 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:39:49.744982 kubelet[3163]: E0420 19:39:42.857085 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:39:53.153389 containerd[1659]: time="2026-04-20T19:39:53.071644291Z" level=info msg="container event discarded" container=d60bfc28bae39dd4c39466e0fffee6553b16b69bc14ddc6752a782b3abc019c6 type=CONTAINER_CREATED_EVENT Apr 20 19:40:03.637151 kubelet[3163]: E0420 19:40:03.627667 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:40:04.506106 kubelet[3163]: E0420 19:40:02.913442 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" interval="6.4s" Apr 20 19:40:20.434221 kubelet[3163]: E0420 19:40:20.429847 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:40:24.052754 kubelet[3163]: E0420 19:40:23.961026 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:37:04Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:37:04Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:37:04Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:37:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-20T19:37:04Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 4m2.75822892s ago; threshold is 
3m0s]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\\\",\\\"ghcr.io/flatcar/calico/node:v3.31.4\\\"],\\\"sizeBytes\\\":159838426},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\\\",\\\"ghcr.io/flatcar/calico/cni:v3.31.4\\\"],\\\"sizeBytes\\\":72167716},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.31.4\\\"],\\\"sizeBytes\\\":49971841},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\\\",\\\"quay.io/tigera/operator:v1.40.7\\\"],\\\"sizeBytes\\\":40842151},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\\\",\\\"registry.k8s.io/kube-proxy:v1.33.11\\\"],\\\"sizeBytes\\\":32009730},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\\\",\\\"registry.k8s.io/kube-apiserver:v1.33.11\\\"],\\\"sizeBytes\\\":30190588},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\\\",\\\"registry.k8s.io/kube-controller-manager:v1.33.11\\\"],\\\"sizeBytes\\\":27737794},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\\\",\\\"registry.k8s.io/etcd:3.5.24-0\\\"],\\\"sizeBytes\\\":23716032},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\\\",\\\"registry.k8s.io/kube-scheduler:v1.33.11\\\"],\\\"sizeBytes\\\":21856121},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77
997d243aec73da05f27aed0c5e9d65bfa98933c519d97\\\",\\\"registry.k8s.io/coredns/coredns:v1.12.0\\\"],\\\"sizeBytes\\\":20939036},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\\\"],\\\"sizeBytes\\\":16260314},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\\\",\\\"ghcr.io/flatcar/calico/csi:v3.31.4\\\"],\\\"sizeBytes\\\":10348547},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\\\"],\\\"sizeBytes\\\":6186255},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded" Apr 20 19:40:36.616738 kubelet[3163]: E0420 19:40:36.256497 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:40:41.228612 kubelet[3163]: E0420 19:40:39.506818 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 
19:40:49.328405 kubelet[3163]: E0420 19:40:48.521942 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 19:40:51.260785 kubelet[3163]: E0420 19:40:49.975273 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 20 19:40:52.184241 kubelet[3163]: E0420 19:40:52.132069 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:40:56.386005 kubelet[3163]: E0420 19:40:45.687207 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:41:06.336253 kubelet[3163]: E0420 19:41:06.318350 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12m17.827s" Apr 20 19:41:08.792299 kubelet[3163]: I0420 19:41:08.770431 3163 scope.go:117] "RemoveContainer" containerID="d60bfc28bae39dd4c39466e0fffee6553b16b69bc14ddc6752a782b3abc019c6" Apr 20 19:41:13.871427 kubelet[3163]: I0420 19:41:11.884336 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": net/http: TLS handshake timeout" Apr 20 19:41:22.881378 kubelet[3163]: E0420 19:41:17.490524 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 20 19:41:33.636842 kubelet[3163]: E0420 19:41:31.474936 3163 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 20 19:41:34.637890 kubelet[3163]: E0420 19:41:33.282885 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 19:41:35.902031 kubelet[3163]: E0420 19:41:34.591823 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m1.726179387s ago; threshold is 3m0s]" Apr 20 19:41:39.662931 kubelet[3163]: E0420 19:41:39.656076 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:41:43.965003 kubelet[3163]: E0420 19:41:43.952068 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:41:49.881338 kubelet[3163]: E0420 19:41:48.092983 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m5.689027106s ago; threshold is 3m0s]" Apr 20 19:41:50.865306 sshd[6154]: Connection closed by 10.0.0.1 port 51734 Apr 20 19:41:51.047000 audit[6125]: AUDIT1106 pid=6125 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:41:51.177000 audit[6125]: AUDIT1104 pid=6125 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:41:50.856588 sshd-session[6125]: pam_unix(sshd:session): session closed for user core Apr 20 19:41:52.838408 kernel: audit: type=1106 audit(1776714111.047:1129): pid=6125 uid=0 auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:41:52.929266 kernel: audit: type=1104 audit(1776714111.177:1130): pid=6125 uid=0 
auid=500 ses=53 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:41:54.269991 systemd[1]: sshd@51-4106-10.0.0.14:22-10.0.0.1:51734.service: Deactivated successfully. Apr 20 19:41:54.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-4106-10.0.0.14:22-10.0.0.1:51734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:41:55.017402 systemd[1]: sshd@51-4106-10.0.0.14:22-10.0.0.1:51734.service: Consumed 18.257s CPU time, 4.1M memory peak. Apr 20 19:41:56.572872 kubelet[3163]: E0420 19:41:54.342792 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:41:57.063108 kernel: audit: type=1131 audit(1776714114.890:1131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@51-4106-10.0.0.14:22-10.0.0.1:51734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:41:57.391264 systemd[1]: session-53.scope: Deactivated successfully. Apr 20 19:41:57.823286 systemd[1]: session-53.scope: Consumed 55.697s CPU time, 16M memory peak. Apr 20 19:41:59.381372 systemd-logind[1627]: Session 53 logged out. Waiting for processes to exit. 
Apr 20 19:42:05.845822 kubelet[3163]: E0420 19:42:04.658931 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m17.427978883s ago; threshold is 3m0s]" Apr 20 19:42:06.671198 kubelet[3163]: E0420 19:42:06.644864 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 20 19:42:09.266521 containerd[1659]: time="2026-04-20T19:42:09.266148562Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}" Apr 20 19:42:09.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-12299-10.0.0.14:22-10.0.0.1:45180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:42:11.054921 kernel: audit: type=1130 audit(1776714129.583:1132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-12299-10.0.0.14:22-10.0.0.1:45180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:42:09.333967 systemd[1]: Started sshd@52-12299-10.0.0.14:22-10.0.0.1:45180.service - OpenSSH per-connection server daemon (10.0.0.1:45180). Apr 20 19:42:10.643181 systemd-logind[1627]: Removed session 53. 
Apr 20 19:42:12.634850 kubelet[3163]: E0420 19:42:09.707930 3163 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m33.791609959s ago; threshold is 3m0s" Apr 20 19:42:16.277748 kubelet[3163]: E0420 19:42:14.592198 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:42:16.645061 kubelet[3163]: E0420 19:42:15.490369 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:42:19.838034 containerd[1659]: time="2026-04-20T19:42:19.836033061Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:42:22.753203 containerd[1659]: time="2026-04-20T19:42:22.568146694Z" level=error msg="ttrpc: received message on inactive stream" stream=251 Apr 20 19:42:25.060117 containerd[1659]: time="2026-04-20T19:42:24.883051099Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}" Apr 20 19:42:25.548274 kubelet[3163]: E0420 19:42:21.904135 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs 
podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:42:16.670443372 +0000 UTC m=+2034.764693076 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:42:26.856295 containerd[1659]: time="2026-04-20T19:42:26.846408039Z" level=error msg="ttrpc: received message on inactive stream" stream=249 Apr 20 19:42:31.808207 kubelet[3163]: E0420 19:42:27.049477 3163 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m45.158973167s ago; threshold is 3m0s" Apr 20 19:42:35.095034 kubelet[3163]: E0420 19:42:29.656881 3163 projected.go:194] Error preparing data for projected volume kube-api-access-kld4g for pod calico-system/calico-apiserver-84684997fc-zpm5v: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:42:38.233058 containerd[1659]: time="2026-04-20T19:42:38.224314135Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:42:39.675875 containerd[1659]: time="2026-04-20T19:42:39.669182914Z" level=error msg="ttrpc: received message on inactive stream" stream=241 Apr 20 19:42:40.753308 containerd[1659]: time="2026-04-20T19:42:39.964323251Z" level=error msg="ttrpc: received message on inactive stream" stream=243 Apr 20 19:42:56.268912 containerd[1659]: 
time="2026-04-20T19:42:55.245513175Z" level=error msg="ttrpc: received message on inactive stream" stream=301 Apr 20 19:43:00.677093 containerd[1659]: time="2026-04-20T19:43:00.586448973Z" level=error msg="ttrpc: received message on inactive stream" stream=303 Apr 20 19:43:06.735963 kubelet[3163]: E0420 19:42:15.307173 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:43:07.779859 containerd[1659]: time="2026-04-20T19:43:06.921407402Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"8e1a1f94ba602e1730dbf1e3a83a3e1893e43a6516ef89bceaf2b520ec75bd8d\": context canceled" Apr 20 19:43:12.547046 kubelet[3163]: E0420 19:42:56.187046 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m18.665920696s ago; threshold is 3m0s]" Apr 20 19:43:13.062000 audit[6191]: AUDIT1101 pid=6191 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:43:13.838865 sshd[6191]: Accepted publickey for core from 10.0.0.1 port 45180 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:43:15.008297 kernel: audit: type=1101 audit(1776714193.062:1133): pid=6191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:43:15.792000 audit[6191]: AUDIT1103 pid=6191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:43:16.136000 audit[6191]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff121deb90 a2=3 a3=0 items=0 ppid=1 pid=6191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:43:16.136000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:43:16.239170 sshd-session[6191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:43:17.330168 kernel: audit: type=1103 audit(1776714195.792:1134): pid=6191 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:43:17.473358 kernel: audit: type=1006 audit(1776714196.136:1135): 
pid=6191 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=54 res=1 Apr 20 19:43:17.652158 kernel: audit: type=1300 audit(1776714196.136:1135): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff121deb90 a2=3 a3=0 items=0 ppid=1 pid=6191 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=54 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:43:17.859926 kernel: audit: type=1327 audit(1776714196.136:1135): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:43:24.561948 systemd-logind[1627]: New session '54' of user 'core' with class 'user' and type 'tty'. Apr 20 19:43:28.686409 systemd[1]: Started session-54.scope - Session 54 of User core. Apr 20 19:43:34.726169 containerd[1659]: time="2026-04-20T19:43:34.596132054Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}" Apr 20 19:43:39.372381 containerd[1659]: time="2026-04-20T19:43:39.080750118Z" level=error msg="get state for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="context deadline exceeded" Apr 20 19:43:40.329000 audit[6191]: AUDIT1105 pid=6191 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:43:41.466874 kernel: audit: type=1105 audit(1776714220.329:1136): pid=6191 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:43:41.763763 containerd[1659]: time="2026-04-20T19:43:39.378784246Z" level=warning msg="unknown status" status=0 Apr 20 19:43:43.259189 kubelet[3163]: E0420 19:43:21.819425 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:43:44.665000 audit[6202]: AUDIT1103 pid=6202 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:43:45.554040 kernel: audit: type=1103 audit(1776714224.665:1137): pid=6202 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:43:52.708123 kubelet[3163]: E0420 19:43:48.620842 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:43:35.316317222 +0000 UTC m=+2113.410566929 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kld4g" (UniqueName: "kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:43:57.136495 kubelet[3163]: E0420 19:43:48.836044 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m48.611955908s ago; threshold is 3m0s]" Apr 20 19:44:00.691907 containerd[1659]: time="2026-04-20T19:43:52.082097789Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:44:09.370057 containerd[1659]: time="2026-04-20T19:43:58.949211236Z" level=error msg="ttrpc: received message on inactive stream" stream=177 Apr 20 19:44:12.649268 containerd[1659]: time="2026-04-20T19:44:09.427274409Z" level=error msg="ttrpc: received message on inactive stream" stream=179 Apr 20 19:44:14.365227 kubelet[3163]: E0420 19:44:02.106431 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 19:44:23.529619 kubelet[3163]: E0420 19:43:42.361984 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" 
cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:44:34.127204 kubelet[3163]: E0420 19:44:12.877518 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 19:44:34.127204 kubelet[3163]: I0420 19:44:16.259237 3163 request.go:752] "Waited before sending request" delay="12.001479105s" reason="client-side throttling, not priority and fairness" verb="PATCH" URL="https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3" Apr 20 19:44:41.466105 kubelet[3163]: E0420 19:44:31.323432 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:44:41.466105 kubelet[3163]: E0420 19:44:41.448515 3163 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}" Apr 20 19:44:41.466105 kubelet[3163]: E0420 19:44:41.452644 3163 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:44:47.188988 kubelet[3163]: E0420 19:44:42.651901 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536" Apr 20 19:44:51.815258 kubelet[3163]: E0420 19:44:43.416434 3163 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Apr 20 19:44:55.581322 kubelet[3163]: E0420 19:44:43.886083 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get 
\"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:44:58.593226 kubelet[3163]: E0420 19:44:56.149281 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:45:01.692401 kubelet[3163]: E0420 19:44:58.847350 3163 kuberuntime_sandbox.go:294] "Failed to list pod sandboxes" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:45:04.637194 kubelet[3163]: E0420 19:45:02.156658 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:45:06.747196 kubelet[3163]: I0420 19:44:59.010103 3163 image_gc_manager.go:222] "Failed to monitor images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:45:13.155433 kubelet[3163]: E0420 19:44:46.591993 3163 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:45:17.269417 kubelet[3163]: E0420 19:45:16.328109 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:45:21.366018 kubelet[3163]: E0420 19:45:21.339467 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:45:24.236485 kubelet[3163]: E0420 19:44:57.239186 3163 container_log_manager.go:274] "Failed to get container status" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" worker=1 containerID="7741ff0e99f5739abda292d10780e87385f1fe549e3c8a3421f5007714bdc536" Apr 20 19:45:31.377335 kubelet[3163]: E0420 19:45:20.450780 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: i/o timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 19:45:34.752015 kubelet[3163]: E0420 19:44:57.830268 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 5m47.914586147s ago; threshold is 3m0s]" Apr 20 19:45:57.544161 kubelet[3163]: E0420 19:45:49.365898 3163 kubelet.go:3102] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:46:02.311445 kubelet[3163]: E0420 19:45:49.775259 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:45:41.574904676 +0000 UTC m=+2239.669154379 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:46:02.825820 kubelet[3163]: E0420 19:45:38.134762 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 7m5.212836317s ago; threshold is 3m0s]" Apr 20 19:46:02.825820 kubelet[3163]: E0420 19:46:02.646331 3163 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Apr 20 19:46:05.245065 kubelet[3163]: E0420 19:45:49.950678 3163 log.go:32] "Get ImageStatus from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" image="ghcr.io/flatcar/calico/apiserver:v3.31.4" Apr 20 19:46:07.914909 kubelet[3163]: E0420 19:45:59.482369 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:46:09.146514 kubelet[3163]: I0420 19:46:04.547101 3163 request.go:752] "Waited before sending request" delay="7.915805584s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900" Apr 20 19:46:15.524370 kubelet[3163]: E0420 19:46:07.746381 3163 kuberuntime_image.go:104] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:46:15.524370 kubelet[3163]: I0420 19:46:15.496392 3163 image_gc_manager.go:230] "Failed to update image list" err="rpc 
error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:46:25.455272 kubelet[3163]: E0420 19:46:24.078375 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 7m34.815789409s ago; threshold is 3m0s]" Apr 20 19:46:29.542764 kubelet[3163]: E0420 19:46:28.014354 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:46:30.782073 kubelet[3163]: E0420 19:46:23.802018 3163 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}" Apr 20 19:46:33.094358 kubelet[3163]: E0420 19:46:32.847618 3163 kuberuntime_container.go:540] "ListContainers failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:46:36.315319 kubelet[3163]: E0420 19:46:36.307355 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:46:39.195368 sshd[6202]: Connection closed by 10.0.0.1 port 45180 Apr 20 19:46:40.594000 audit[6191]: AUDIT1106 pid=6191 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:46:40.850000 audit[6191]: AUDIT1104 pid=6191 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:46:42.488457 kernel: audit: type=1106 audit(1776714400.594:1138): pid=6191 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:46:39.383970 sshd-session[6191]: pam_unix(sshd:session): session closed for user core Apr 20 19:46:43.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-12299-10.0.0.14:22-10.0.0.1:45180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:46:44.281375 kernel: audit: type=1104 audit(1776714400.850:1139): pid=6191 uid=0 auid=500 ses=54 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:46:43.582814 systemd[1]: sshd@52-12299-10.0.0.14:22-10.0.0.1:45180.service: Deactivated successfully. 
Apr 20 19:46:44.673227 kubelet[3163]: E0420 19:46:36.161240 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 5m43.422737664s ago; threshold is 3m0s]" Apr 20 19:46:44.673227 kubelet[3163]: E0420 19:46:43.541950 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:46:44.673227 kubelet[3163]: E0420 19:46:44.467406 3163 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.31.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kld4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84684997fc-zpm5v_calico-system(dfb0b7d2-b28d-4433-9fba-0074dfdf81ee): CreateContainerConfigError: context deadline exceeded" logger="UnhandledError" Apr 20 19:46:45.856150 kernel: audit: type=1131 audit(1776714403.948:1140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@52-12299-10.0.0.14:22-10.0.0.1:45180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:46:43.960096 systemd[1]: sshd@52-12299-10.0.0.14:22-10.0.0.1:45180.service: Consumed 14.736s CPU time, 4.3M memory peak. Apr 20 19:46:45.284243 systemd[1]: session-54.scope: Deactivated successfully. Apr 20 19:46:47.138734 kubelet[3163]: E0420 19:46:43.128785 3163 kubelet.go:1596] "Image garbage collection failed multiple times in a row" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 20 19:46:45.504346 systemd[1]: session-54.scope: Consumed 1min 9.837s CPU time, 17.7M memory peak. 
Apr 20 19:46:47.573663 containerd[1659]: time="2026-04-20T19:46:47.216492023Z" level=info msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" with timeout 30 (s)" Apr 20 19:46:49.700152 kubelet[3163]: E0420 19:46:49.681136 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with CreateContainerConfigError: \"context deadline exceeded\"" pod="calico-system/calico-apiserver-84684997fc-zpm5v" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" Apr 20 19:46:50.531403 containerd[1659]: time="2026-04-20T19:46:50.153424885Z" level=info msg="Skipping the sending of signal terminated to container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" because a prior stop with timeout>0 request already sent the signal" Apr 20 19:46:50.631492 kubelet[3163]: E0420 19:46:50.629777 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:46:51.334506 kubelet[3163]: E0420 19:46:51.157524 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC 
m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:46:51.890648 systemd-logind[1627]: Session 54 logged out. Waiting for processes to exit. Apr 20 19:46:52.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-9-10.0.0.14:22-10.0.0.1:49660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:46:52.351220 kernel: audit: type=1130 audit(1776714412.300:1141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-9-10.0.0.14:22-10.0.0.1:49660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:46:52.300387 systemd[1]: Started sshd@53-9-10.0.0.14:22-10.0.0.1:49660.service - OpenSSH per-connection server daemon (10.0.0.1:49660). Apr 20 19:46:54.874072 systemd-logind[1627]: Removed session 54. 
Apr 20 19:46:55.390751 kubelet[3163]: E0420 19:46:37.472937 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:46:55.742918 kubelet[3163]: E0420 19:46:50.630365 3163 projected.go:194] Error preparing data for projected volume kube-api-access-kld4g for pod calico-system/calico-apiserver-84684997fc-zpm5v: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:46:55.742918 kubelet[3163]: E0420 19:46:50.153325 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 5m59.698262029s ago; threshold is 3m0s]" Apr 20 19:46:57.008617 kubelet[3163]: E0420 19:46:53.809396 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:47:01.564864 kubelet[3163]: E0420 19:46:53.477460 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:47:06.865861 kubelet[3163]: I0420 19:46:59.086399 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout - error from a previous attempt: 
EOF" Apr 20 19:47:14.722516 kubelet[3163]: E0420 19:47:14.703056 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:47:02.418087216 +0000 UTC m=+2320.512336915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:47:20.363027 containerd[1659]: time="2026-04-20T19:47:20.354335865Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}" Apr 20 19:47:21.416277 containerd[1659]: time="2026-04-20T19:47:21.259342516Z" level=info msg="Kill container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\"" Apr 20 19:47:24.516895 kubelet[3163]: E0420 19:47:21.335616 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:47:20.21016739 +0000 UTC m=+2338.304417093 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kld4g" (UniqueName: "kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:47:25.252801 kubelet[3163]: I0420 19:47:19.494962 3163 request.go:752] "Waited before sending request" delay="2.467558094s" reason="client-side throttling, not priority and fairness" verb="PATCH" URL="https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3" Apr 20 19:47:28.680374 kubelet[3163]: E0420 19:47:16.118264 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m23.487112845s ago; threshold is 3m0s]" Apr 20 19:47:32.180758 containerd[1659]: time="2026-04-20T19:47:32.153019418Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:47:32.839158 containerd[1659]: time="2026-04-20T19:47:32.337499823Z" level=error msg="ttrpc: received message on inactive stream" stream=259 Apr 20 19:47:32.839158 containerd[1659]: time="2026-04-20T19:47:32.703217964Z" level=error msg="ttrpc: received message on inactive stream" stream=257 Apr 20 19:47:33.899378 kubelet[3163]: E0420 19:47:33.883305 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:47:38.795735 kubelet[3163]: E0420 19:47:28.612126 3163 configmap.go:193] Couldn't get 
configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:47:40.159476 containerd[1659]: time="2026-04-20T19:47:40.143425454Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}" Apr 20 19:47:41.955000 audit[6235]: AUDIT1101 pid=6235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:47:42.142163 kernel: audit: type=1101 audit(1776714461.955:1142): pid=6235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:47:42.154932 sshd[6235]: Accepted publickey for core from 10.0.0.1 port 49660 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:47:43.129000 audit[6235]: AUDIT1103 pid=6235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:47:43.511095 kernel: audit: type=1103 audit(1776714463.129:1143): pid=6235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:47:43.538448 containerd[1659]: time="2026-04-20T19:47:43.373466336Z" level=error msg="get state for 
d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="context deadline exceeded" Apr 20 19:47:43.538448 containerd[1659]: time="2026-04-20T19:47:43.536224928Z" level=warning msg="unknown status" status=0 Apr 20 19:47:43.706000 audit[6235]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc228f26c0 a2=3 a3=0 items=0 ppid=1 pid=6235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:47:43.706000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:47:44.831211 kernel: audit: type=1006 audit(1776714463.706:1144): pid=6235 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=55 res=1 Apr 20 19:47:44.848521 kernel: audit: type=1300 audit(1776714463.706:1144): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc228f26c0 a2=3 a3=0 items=0 ppid=1 pid=6235 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=55 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:47:44.869285 kernel: audit: type=1327 audit(1776714463.706:1144): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:47:44.870468 sshd-session[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:47:46.340204 kubelet[3163]: E0420 19:47:43.495491 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 19:47:47.377506 containerd[1659]: time="2026-04-20T19:47:47.373087802Z" level=error msg="get state for 
d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="context deadline exceeded" Apr 20 19:47:48.190517 containerd[1659]: time="2026-04-20T19:47:47.387979301Z" level=warning msg="unknown status" status=0 Apr 20 19:47:51.960444 systemd-logind[1627]: New session '55' of user 'core' with class 'user' and type 'tty'. Apr 20 19:47:52.770182 systemd[1]: Started session-55.scope - Session 55 of User core. Apr 20 19:47:54.221821 containerd[1659]: time="2026-04-20T19:47:53.886102910Z" level=error msg="get state for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="context deadline exceeded" Apr 20 19:47:55.030386 containerd[1659]: time="2026-04-20T19:47:55.026912311Z" level=warning msg="unknown status" status=0 Apr 20 19:47:55.234266 containerd[1659]: time="2026-04-20T19:47:55.109505894Z" level=error msg="failed to delete task" error="context deadline exceeded" id=d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf Apr 20 19:47:55.234266 containerd[1659]: time="2026-04-20T19:47:55.132275453Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 19:47:56.338000 audit[6235]: AUDIT1105 pid=6235 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:47:56.977159 kernel: audit: type=1105 audit(1776714476.338:1145): pid=6235 uid=0 auid=500 ses=55 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:47:57.055632 kubelet[3163]: E0420 19:47:50.687863 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:47:57.639000 audit[6259]: AUDIT1103 pid=6259 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:47:58.252074 kernel: audit: type=1103 audit(1776714477.639:1146): pid=6259 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:47:59.940329 kubelet[3163]: E0420 19:47:54.663389 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:48:02.272997 kubelet[3163]: E0420 19:48:02.269763 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:48:05.174975 kubelet[3163]: E0420 19:48:01.464026 3163 
kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m54.267316187s ago; threshold is 3m0s]" Apr 20 19:48:05.620104 kubelet[3163]: E0420 19:48:05.536383 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:48:07.973935 kubelet[3163]: E0420 19:48:07.965461 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:47:46.561011265 +0000 UTC m=+2364.655260971 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:48:10.475374 kubelet[3163]: E0420 19:48:10.462912 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:48:08.892414246 +0000 UTC m=+2386.986663963 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:48:11.282448 kubelet[3163]: E0420 19:48:11.279982 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:48:19.392825 kubelet[3163]: E0420 19:48:18.342300 3163 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 7m25.842101898s ago; threshold is 3m0s" Apr 20 19:48:20.863994 kubelet[3163]: E0420 19:48:14.929474 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 
19:48:21.679213 kubelet[3163]: E0420 19:48:19.240052 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:48:27.677156 kubelet[3163]: E0420 19:48:27.654381 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 7m36.240566378s ago; threshold is 3m0s]" Apr 20 19:48:29.671090 kubelet[3163]: E0420 19:48:29.667047 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 20 19:48:31.529807 kubelet[3163]: I0420 19:48:19.470177 3163 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-20T19:48:16Z","lastTransitionTime":"2026-04-20T19:48:16Z","reason":"KubeletNotReady","message":"PLEG is not healthy: pleg was last seen active 7m27.736239176s ago; threshold is 3m0s"} Apr 20 19:48:34.218475 kubelet[3163]: E0420 19:48:32.247393 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:48:35.889012 kubelet[3163]: I0420 19:48:35.846287 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" 
err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" Apr 20 19:48:37.152185 kubelet[3163]: E0420 19:48:34.552265 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:48:38.221730 kubelet[3163]: E0420 19:48:38.212234 3163 projected.go:194] Error preparing data for projected volume kube-api-access-4m6bv for pod calico-system/csi-node-driver-5h6vg: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:48:41.376168 kubelet[3163]: E0420 19:48:41.330521 3163 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 7m44.836374s ago; threshold is 3m0s]" Apr 20 19:48:42.660358 kubelet[3163]: E0420 19:48:41.577715 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:48:42.660358 kubelet[3163]: E0420 19:48:42.575031 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv podName:9f02930c-961c-4c4b-8334-b61cbd5c3d20 nodeName:}" failed. No retries permitted until 2026-04-20 19:48:43.073258878 +0000 UTC m=+2421.167508594 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4m6bv" (UniqueName: "kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv") pod "csi-node-driver-5h6vg" (UID: "9f02930c-961c-4c4b-8334-b61cbd5c3d20") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:48:44.271238 containerd[1659]: time="2026-04-20T19:48:43.406511838Z" level=info msg="StopContainer for \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\" with timeout 30 (s)" Apr 20 19:48:49.381326 containerd[1659]: time="2026-04-20T19:48:49.105467431Z" level=info msg="StopContainer for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" with timeout 2 (s)" Apr 20 19:48:50.423716 containerd[1659]: time="2026-04-20T19:48:50.422813869Z" level=info msg="Stop container \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\" with signal terminated" Apr 20 19:48:50.656092 kubelet[3163]: E0420 19:48:50.645513 3163 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:48:51.109506 kubelet[3163]: E0420 19:48:50.669043 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:48:51.658144581 +0000 UTC m=+2429.752394291 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:48:52.160465 kubelet[3163]: E0420 19:48:48.967269 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:48:52.790847 containerd[1659]: time="2026-04-20T19:48:52.142335540Z" level=error msg="get state for 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" error="context deadline exceeded" Apr 20 19:48:52.790847 containerd[1659]: time="2026-04-20T19:48:52.471175818Z" level=warning msg="unknown status" status=0 Apr 20 19:48:52.790847 containerd[1659]: time="2026-04-20T19:48:52.471688595Z" level=info msg="Stop container \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" with signal terminated" Apr 20 19:48:53.856173 containerd[1659]: time="2026-04-20T19:48:53.788435542Z" level=error msg="ttrpc: received message on inactive stream" stream=325 Apr 20 19:48:55.915316 kubelet[3163]: E0420 19:48:51.296303 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:48:55.915316 kubelet[3163]: E0420 19:48:54.238281 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:48:56.262681 containerd[1659]: time="2026-04-20T19:48:56.242281125Z" level=info msg="StopContainer for 
\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" with timeout 30 (s)" Apr 20 19:48:57.055382 kubelet[3163]: E0420 19:48:54.547225 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:48:57.055382 kubelet[3163]: E0420 19:48:57.028776 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:48:57.786398 containerd[1659]: time="2026-04-20T19:48:56.991741362Z" level=error msg="get state for 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" error="context deadline exceeded" Apr 20 19:48:57.786398 containerd[1659]: time="2026-04-20T19:48:57.777718504Z" level=warning msg="unknown status" status=0 Apr 20 19:48:58.573917 containerd[1659]: time="2026-04-20T19:48:58.571949969Z" level=error msg="ttrpc: received message on inactive stream" stream=329 Apr 20 19:48:59.378166 containerd[1659]: time="2026-04-20T19:48:59.372274023Z" level=info msg="Skipping the sending of signal terminated to container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" because a prior stop with timeout>0 request already sent the signal" Apr 20 19:49:01.089494 kubelet[3163]: E0420 19:48:57.637515 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" Apr 20 19:49:01.941849 kubelet[3163]: E0420 19:49:00.342409 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:49:02.119135 containerd[1659]: 
time="2026-04-20T19:49:01.174526439Z" level=error msg="failed to drain init process d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf io" error="context deadline exceeded" runtime=io.containerd.runc.v2
Apr 20 19:49:02.119135 containerd[1659]: time="2026-04-20T19:49:01.222209672Z" level=error msg="ttrpc: received message on inactive stream" stream=249
Apr 20 19:49:02.670387 containerd[1659]: time="2026-04-20T19:49:01.800441488Z" level=error msg="ttrpc: received message on inactive stream" stream=251
Apr 20 19:49:02.670387 containerd[1659]: time="2026-04-20T19:49:02.396209954Z" level=error msg="ttrpc: received message on inactive stream" stream=253
Apr 20 19:49:02.670387 containerd[1659]: time="2026-04-20T19:49:02.624486539Z" level=error msg="ttrpc: received message on inactive stream" stream=255
Apr 20 19:49:03.713467 containerd[1659]: time="2026-04-20T19:49:03.468496119Z" level=error msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" to be killed: wait container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\": context canceled"
Apr 20 19:49:03.973515 containerd[1659]: time="2026-04-20T19:49:03.630651343Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}"
Apr 20 19:49:03.980727 kubelet[3163]: E0420 19:48:57.955333 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret"
Apr 20 19:49:04.557241 sshd[6259]: Connection closed by 10.0.0.1 port 49660
Apr 20 19:49:04.686327 sshd-session[6235]: pam_unix(sshd:session): session closed for user core
Apr 20 19:49:05.366000 audit[6235]: AUDIT1106 pid=6235 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:49:05.493902 containerd[1659]: time="2026-04-20T19:49:04.489203899Z" level=error msg="ttrpc: received message on inactive stream" stream=321
Apr 20 19:49:05.477000 audit[6235]: AUDIT1104 pid=6235 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:49:06.141139 kernel: audit: type=1106 audit(1776714545.366:1147): pid=6235 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:49:06.252309 kubelet[3163]: E0420 19:49:02.998029 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:49:02.99109759 +0000 UTC m=+2441.085347293 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition
Apr 20 19:49:06.918272 kernel: audit: type=1104 audit(1776714545.477:1148): pid=6235 uid=0 auid=500 ses=55 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:49:06.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-9-10.0.0.14:22-10.0.0.1:49660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:49:06.721737 systemd[1]: sshd@53-9-10.0.0.14:22-10.0.0.1:49660.service: Deactivated successfully.
Apr 20 19:49:07.264250 kernel: audit: type=1131 audit(1776714546.925:1149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@53-9-10.0.0.14:22-10.0.0.1:49660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:49:06.970405 systemd[1]: sshd@53-9-10.0.0.14:22-10.0.0.1:49660.service: Consumed 11.777s CPU time, 4.3M memory peak.
Apr 20 19:49:07.677311 systemd[1]: session-55.scope: Deactivated successfully.
Apr 20 19:49:07.826853 containerd[1659]: time="2026-04-20T19:49:07.755982116Z" level=error msg="ttrpc: received message on inactive stream" stream=323
Apr 20 19:49:07.864222 systemd[1]: session-55.scope: Consumed 32.470s CPU time, 17.9M memory peak.
Apr 20 19:49:08.708025 systemd-logind[1627]: Session 55 logged out. Waiting for processes to exit.
Apr 20 19:49:09.914824 kubelet[3163]: E0420 19:49:06.790445 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 20 19:49:10.282135 kubelet[3163]: E0420 19:49:10.248331 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:49:08.129054707 +0000 UTC m=+2446.223304421 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:49:10.282135 kubelet[3163]: I0420 19:49:10.249637 3163 request.go:752] "Waited before sending request" delay="1.750800707s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825"
Apr 20 19:49:10.432866 kubelet[3163]: E0420 19:49:04.992250 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" containerName="kube-controller-manager" containerID="containerd://d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" gracePeriod=30
Apr 20 19:49:12.477340 systemd-logind[1627]: Removed session 55.
Apr 20 19:49:12.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10-10.0.0.14:22-10.0.0.1:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:49:12.557436 systemd[1]: Started sshd@54-10-10.0.0.14:22-10.0.0.1:32854.service - OpenSSH per-connection server daemon (10.0.0.1:32854).
Apr 20 19:49:14.159029 kernel: audit: type=1130 audit(1776714552.825:1150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10-10.0.0.14:22-10.0.0.1:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:49:14.171249 kubelet[3163]: I0420 19:49:13.322235 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": net/http: TLS handshake timeout"
Apr 20 19:49:14.566942 containerd[1659]: time="2026-04-20T19:49:14.047375234Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:49:15.491786 kubelet[3163]: E0420 19:49:10.450344 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-controller-manager" containerID={"Type":"containerd","ID":"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf"} pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:49:16.038901 kubelet[3163]: E0420 19:49:09.712163 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 20 19:49:16.138424 containerd[1659]: time="2026-04-20T19:49:14.199279601Z" level=error msg="ttrpc: received message on inactive stream" stream=189
Apr 20 19:49:18.269333 containerd[1659]: time="2026-04-20T19:49:18.194527604Z" level=error msg="ttrpc: received message on inactive stream" stream=187
Apr 20 19:49:18.483880 kubelet[3163]: E0420 19:49:18.479906 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 19:49:19.223369 kubelet[3163]: E0420 19:49:19.043428 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:49:20.031812 containerd[1659]: time="2026-04-20T19:49:19.525528485Z" level=error msg="get state for 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" error="context deadline exceeded"
Apr 20 19:49:20.657236 containerd[1659]: time="2026-04-20T19:49:20.632732652Z" level=warning msg="unknown status" status=0
Apr 20 19:49:22.977279 kubelet[3163]: E0420 19:49:16.470809 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:49:23.501496 kubelet[3163]: E0420 19:49:13.977364 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 19:49:23.501496 kubelet[3163]: E0420 19:49:21.556319 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-controller-manager\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec"
Apr 20 19:49:26.951172 kubelet[3163]: E0420 19:49:26.858141 3163 projected.go:194] Error preparing data for projected volume kube-api-access-kld4g for pod calico-system/calico-apiserver-84684997fc-zpm5v: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:49:29.050978 kubelet[3163]: E0420 19:49:24.725515 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
Apr 20 19:49:29.788494 containerd[1659]: time="2026-04-20T19:49:29.674821338Z" level=info msg="Kill container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\""
Apr 20 19:49:31.161711 kubelet[3163]: E0420 19:49:31.083213 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 20 19:49:36.981345 kubelet[3163]: E0420 19:49:36.857440 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition
Apr 20 19:49:39.862499 kubelet[3163]: E0420 19:49:35.922477 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:49:37.895930844 +0000 UTC m=+2475.990180560 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-kld4g" (UniqueName: "kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:49:44.018717 kubelet[3163]: E0420 19:49:44.006939 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 20 19:49:45.343494 update_engine[1636]: I20260420 19:49:45.158497 1636 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 20 19:49:46.651466 update_engine[1636]: I20260420 19:49:45.307475 1636 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 20 19:49:46.651466 update_engine[1636]: I20260420 19:49:46.552944 1636 omaha_request_params.cc:62] Current group set to alpha
Apr 20 19:49:47.366282 kubelet[3163]: E0420 19:49:41.755230 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:49:47.832038 update_engine[1636]: I20260420 19:49:46.707361 1636 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 20 19:49:47.832038 update_engine[1636]: I20260420 19:49:46.718451 1636 update_attempter.cc:643] Scheduling an action processor start.
Apr 20 19:49:47.832038 update_engine[1636]: I20260420 19:49:46.726262 1636 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 20 19:49:47.832038 update_engine[1636]: I20260420 19:49:46.883313 1636 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 20 19:49:47.832038 update_engine[1636]: I20260420 19:49:46.901172 1636 omaha_request_action.cc:272] Request:
Apr 20 19:49:47.832038 update_engine[1636]:
Apr 20 19:49:47.832038 update_engine[1636]:
Apr 20 19:49:47.832038 update_engine[1636]:
Apr 20 19:49:47.832038 update_engine[1636]:
Apr 20 19:49:47.832038 update_engine[1636]:
Apr 20 19:49:47.832038 update_engine[1636]:
Apr 20 19:49:47.832038 update_engine[1636]:
Apr 20 19:49:47.832038 update_engine[1636]:
Apr 20 19:49:47.832038 update_engine[1636]: I20260420 19:49:46.902159 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 19:49:47.832038 update_engine[1636]: I20260420 19:49:47.346493 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 19:49:47.832038 update_engine[1636]: I20260420 19:49:47.813476 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 19:49:47.832038 update_engine[1636]: E20260420 19:49:47.858678 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 19:49:50.947750 update_engine[1636]: I20260420 19:49:47.934088 1636 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 20 19:49:51.243486 kubelet[3163]: E0420 19:49:49.762320 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:48:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:48:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:48:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:48:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-20T19:48:16Z\\\",\\\"message\\\":\\\"PLEG is not healthy: pleg was last seen active 7m27.736239176s ago; threshold is 3m0s\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\\\",\\\"ghcr.io/flatcar/calico/node:v3.31.4\\\"],\\\"sizeBytes\\\":159838426},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\\\",\\\"ghcr.io/flatcar/calico/cni:v3.31.4\\\"],\\\"sizeBytes\\\":72167716},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.31.4\\\"],\\\"sizeBytes\\\":49971841},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\\\",\\\"quay.io/tigera/operator:v1.40.7\\\"],\\\"sizeBytes\\\":40842151},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\\\",\\\"registry.k8s.io/kube-proxy:v1.33.11\\\"],\\\"sizeBytes\\\":32009730},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\\\",\\\"registry.k8s.io/kube-apiserver:v1.33.11\\\"],\\\"sizeBytes\\\":30190588},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\\\",\\\"registry.k8s.io/kube-controller-manager:v1.33.11\\\"],\\\"sizeBytes\\\":27737794},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\\\",\\\"registry.k8s.io/etcd:3.5.24-0\\\"],\\\"sizeBytes\\\":23716032},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\\\",\\\"registry.k8s.io/kube-scheduler:v1.33.11\\\"],\\\"sizeBytes\\\":21856121},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\\\",\\\"registry.k8s.io/coredns/coredns:v1.12.0\\\"],\\\"sizeBytes\\\":20939036},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\\\"],\\\"sizeBytes\\\":16260314},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\\\",\\\"ghcr.io/flatcar/calico/csi:v3.31.4\\\"],\\\"sizeBytes\\\":10348547},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\\\"],\\\"sizeBytes\\\":6186255},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:49:51.934312 kubelet[3163]: E0420 19:49:51.918128 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 20 19:49:52.933988 kubelet[3163]: E0420 19:49:52.916067 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:49:53.144115 kubelet[3163]: E0420 19:49:53.116467 3163 projected.go:194] Error preparing data for projected volume kube-api-access-5kv6b for pod calico-system/calico-node-g9fs5: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:49:53.349149 kubelet[3163]: E0420 19:49:53.301762 3163 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:49:55.573475 kubelet[3163]: E0420 19:49:54.457209 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:49:57.994012 kubelet[3163]: E0420 19:49:54.457438 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:49:58.167089 update_engine[1636]: I20260420 19:49:58.142168 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 19:49:58.578400 update_engine[1636]: I20260420 19:49:58.184489 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 19:49:58.578400 update_engine[1636]: I20260420 19:49:58.443405 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 19:49:59.169636 update_engine[1636]: E20260420 19:49:58.671828 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 19:49:59.169636 update_engine[1636]: I20260420 19:49:58.736135 1636 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 20 19:49:59.605776 containerd[1659]: time="2026-04-20T19:49:59.585467962Z" level=error msg="Failed to delete exec process \"7b416f275bc9047a6c0e23a635793f02eb030831518ac300f5c2ba9c00df882b\" for container \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\"" error="context deadline exceeded"
Apr 20 19:50:00.093389 containerd[1659]: time="2026-04-20T19:49:59.782193099Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"7b416f275bc9047a6c0e23a635793f02eb030831518ac300f5c2ba9c00df882b\": context canceled"
Apr 20 19:50:00.159444 kubelet[3163]: E0420 19:49:54.462404 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:49:44.338036116 +0000 UTC m=+2482.432285839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition
Apr 20 19:50:00.165000 audit[6292]: AUDIT1101 pid=6292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:50:00.587807 sshd[6292]: Accepted publickey for core from 10.0.0.1 port 32854 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:50:01.012753 kernel: audit: type=1101 audit(1776714600.165:1151): pid=6292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:50:01.254269 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 20 19:50:01.339000 audit[6292]: AUDIT1103 pid=6292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:50:01.422000 audit[6292]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd62b66820 a2=3 a3=0 items=0 ppid=1 pid=6292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:01.422000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:50:02.158455 kernel: audit: type=1103 audit(1776714601.339:1152): pid=6292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:50:02.218377 kubelet[3163]: I0420 19:50:01.663993 3163 request.go:752] "Waited before sending request" delay="2.96919499s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825"
Apr 20 19:50:02.836180 kernel: audit: type=1006 audit(1776714601.422:1153): pid=6292 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=56 res=1
Apr 20 19:50:02.634496 sshd-session[6292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:50:03.229922 kubelet[3163]: E0420 19:50:01.649433 3163 projected.go:194] Error preparing data for projected volume kube-api-access-4m6bv for pod calico-system/csi-node-driver-5h6vg: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:03.345025 kernel: audit: type=1300 audit(1776714601.422:1153): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd62b66820 a2=3 a3=0 items=0 ppid=1 pid=6292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=56 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:03.382995 kernel: audit: type=1327 audit(1776714601.422:1153): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:50:06.254499 containerd[1659]: time="2026-04-20T19:50:06.254037349Z" level=error msg="ttrpc: received message on inactive stream" stream=335
Apr 20 19:50:07.448842 containerd[1659]: time="2026-04-20T19:50:07.176322038Z" level=error msg="ttrpc: received message on inactive stream" stream=331
Apr 20 19:50:08.029441 containerd[1659]: time="2026-04-20T19:50:08.026125038Z" level=info msg="Kill container \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\""
Apr 20 19:50:09.185314 update_engine[1636]: I20260420 19:50:09.150784 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 19:50:10.363939 update_engine[1636]: I20260420 19:50:09.300236 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 19:50:10.363939 update_engine[1636]: I20260420 19:50:10.302321 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 19:50:09.763918 systemd-logind[1627]: New session '56' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:50:11.051901 update_engine[1636]: E20260420 19:50:10.392920 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 19:50:11.051901 update_engine[1636]: I20260420 19:50:10.456481 1636 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 20 19:50:10.456752 systemd[1]: Started session-56.scope - Session 56 of User core.
Apr 20 19:50:12.685690 kubelet[3163]: E0420 19:50:12.680327 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:50:13.421000 audit[6292]: AUDIT1105 pid=6292 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:50:13.676605 kernel: audit: type=1105 audit(1776714613.421:1154): pid=6292 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:50:15.506000 audit[6325]: AUDIT1103 pid=6325 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:50:15.842276 kernel: audit: type=1103 audit(1776714615.506:1155): pid=6325 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:50:15.842278 systemd[1]: cri-containerd-7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9.scope: Deactivated successfully.
Apr 20 19:50:16.141253 systemd[1]: cri-containerd-7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9.scope: Consumed 3min 43.674s CPU time, 345.4M memory peak, 84.5M read from disk, 1.7M written to disk.
Apr 20 19:50:16.573763 kubelet[3163]: E0420 19:50:16.561657 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9"
Apr 20 19:50:16.959288 kubelet[3163]: E0420 19:50:16.792947 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:50:12.691800291 +0000 UTC m=+2510.786050021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5kv6b" (UniqueName: "kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:17.564136 kubelet[3163]: I0420 19:50:17.470324 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": net/http: TLS handshake timeout"
Apr 20 19:50:18.438490 kubelet[3163]: E0420 19:50:08.647344 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:50:20.782371 containerd[1659]: time="2026-04-20T19:50:20.776071714Z" level=error msg="StopContainer for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to kill container \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\": context canceled"
Apr 20 19:50:21.855286 update_engine[1636]: I20260420 19:50:21.118018 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 20 19:50:21.855286 update_engine[1636]: I20260420 19:50:21.357172 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:21.873711 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 20 19:50:23.448611 update_engine[1636]: E20260420 19:50:21.994235 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:22.080443 1636 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:22.229127 1636 omaha_request_action.cc:617] Omaha request response:
Apr 20 19:50:23.448611 update_engine[1636]: E20260420 19:50:22.269472 1636 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:22.396652 1636 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:22.418296 1636 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:22.445133 1636 update_attempter.cc:306] Processing Done.
Apr 20 19:50:23.448611 update_engine[1636]: E20260420 19:50:22.485224 1636 update_attempter.cc:619] Update failed.
Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:22.595429 1636 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:22.658310 1636 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:22.821451 1636 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:22.943465 1636 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:23.241984 1636 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 20 19:50:23.448611 update_engine[1636]: I20260420 19:50:23.340220 1636 omaha_request_action.cc:272] Request: Apr 20 19:50:23.448611 update_engine[1636]: Apr 20 19:50:23.448611 update_engine[1636]: Apr 20 19:50:29.493063 update_engine[1636]: Apr 20 19:50:29.493063 update_engine[1636]: Apr 20 19:50:29.493063 update_engine[1636]: Apr 20 19:50:29.493063 update_engine[1636]: Apr 20 19:50:29.493063 update_engine[1636]: I20260420 19:50:23.414314 1636 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 20 19:50:29.493063 update_engine[1636]: I20260420 19:50:23.444325 1636 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 20 19:50:29.493063 update_engine[1636]: I20260420 19:50:24.073513 1636 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 20 19:50:29.493063 update_engine[1636]: E20260420 19:50:24.421164 1636 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Apr 20 19:50:29.493063 update_engine[1636]: I20260420 19:50:24.666730 1636 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 20 19:50:29.493063 update_engine[1636]: I20260420 19:50:24.749739 1636 omaha_request_action.cc:617] Omaha request response: Apr 20 19:50:29.493063 update_engine[1636]: I20260420 19:50:24.819465 1636 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:50:29.493063 update_engine[1636]: I20260420 19:50:24.901350 1636 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 20 19:50:29.493063 update_engine[1636]: I20260420 19:50:24.978495 1636 update_attempter.cc:306] Processing Done. Apr 20 19:50:29.493063 update_engine[1636]: I20260420 19:50:25.052366 1636 update_attempter.cc:310] Error event sent. 
Apr 20 19:50:29.493063 update_engine[1636]: I20260420 19:50:25.069522 1636 update_check_scheduler.cc:74] Next update check in 42m9s Apr 20 19:50:31.851390 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 20 19:50:31.851390 locksmithd[1713]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 20 19:50:33.643415 kubelet[3163]: E0420 19:50:33.643094 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:50:33.838392 kubelet[3163]: E0420 19:50:26.992485 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:50:17.364036175 +0000 UTC m=+2515.458285877 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:36.706000 audit[6341]: NETFILTER_CFG table=filter:162 family=2 entries=9 op=nft_register_rule pid=6341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:50:36.706000 audit[6341]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc619707d0 a2=0 a3=7ffc619707bc items=0 ppid=3270 pid=6341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:50:36.706000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:50:37.175851 kernel: audit: type=1325 audit(1776714636.706:1156): table=filter:162 family=2 entries=9 op=nft_register_rule pid=6341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:50:37.195380 kernel: audit: type=1300 audit(1776714636.706:1156): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffc619707d0 a2=0 a3=7ffc619707bc items=0 ppid=3270 pid=6341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:50:37.238494 kernel: audit: type=1327 audit(1776714636.706:1156): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:50:37.649487 kubelet[3163]: E0420 19:50:34.851401 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 19:50:38.157000 audit[6341]: NETFILTER_CFG table=nat:163 family=2 entries=55 op=nft_unregister_chain pid=6341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:50:38.157000 audit[6341]: SYSCALL arch=c000003e syscall=46 success=yes exit=16780 a0=3 a1=7ffc619707d0 a2=0 a3=7ffc619707bc items=0 ppid=3270 pid=6341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:50:38.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:50:38.562378 kernel: audit: type=1325 audit(1776714638.157:1157): table=nat:163 family=2 entries=55 op=nft_unregister_chain pid=6341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Apr 20 19:50:38.572832 kernel: audit: type=1300 audit(1776714638.157:1157): arch=c000003e syscall=46 success=yes exit=16780 a0=3 a1=7ffc619707d0 a2=0 a3=7ffc619707bc items=0 ppid=3270 pid=6341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:50:38.586609 kernel: audit: type=1327 audit(1776714638.157:1157): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Apr 20 19:50:40.005350 kubelet[3163]: E0420 19:50:38.548128 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 20 19:50:41.093792 containerd[1659]: time="2026-04-20T19:50:41.040504297Z" level=error 
msg="ttrpc: received message on inactive stream" stream=347 Apr 20 19:50:41.781456 kubelet[3163]: I0420 19:50:40.241358 3163 request.go:752] "Waited before sending request" delay="2.573145096s" reason="client-side throttling, not priority and fairness" verb="PATCH" URL="https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3" Apr 20 19:50:44.173514 kubelet[3163]: E0420 19:50:38.179004 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:50:44.663297 kubelet[3163]: E0420 19:50:40.971116 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:50:47.145181 kubelet[3163]: I0420 19:50:40.982843 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:50:47.573367 kubelet[3163]: E0420 19:50:36.838649 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:50:48.533030 containerd[1659]: time="2026-04-20T19:50:48.528391804Z" level=info msg="Kill container \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\"" Apr 20 19:50:48.850224 kubelet[3163]: E0420 19:50:33.642067 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: 
code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/calico-node-g9fs5" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" containerName="calico-node" containerID="containerd://7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" gracePeriod=2 Apr 20 19:50:48.850224 kubelet[3163]: E0420 19:50:48.832349 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="calico-node" containerID={"Type":"containerd","ID":"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9"} pod="calico-system/calico-node-g9fs5" Apr 20 19:50:49.657239 kubelet[3163]: E0420 19:50:42.077373 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 19:50:41.370240601 +0000 UTC m=+2539.464490303 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:49.657239 kubelet[3163]: E0420 19:50:46.568942 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:50:49.657239 kubelet[3163]: E0420 19:50:48.310500 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 19:50:49.824953 kubelet[3163]: I0420 19:50:49.819153 3163 scope.go:117] "RemoveContainer" containerID="d60bfc28bae39dd4c39466e0fffee6553b16b69bc14ddc6752a782b3abc019c6" Apr 20 19:50:50.876260 kubelet[3163]: E0420 19:50:50.717397 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"calico-node\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-system/calico-node-g9fs5" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" Apr 20 19:50:52.450450 kubelet[3163]: E0420 19:50:50.686994 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" 
reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:50:52.760876 kubelet[3163]: E0420 19:50:49.827147 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 20 19:50:53.247042 kubelet[3163]: E0420 19:50:51.163792 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv podName:9f02930c-961c-4c4b-8334-b61cbd5c3d20 nodeName:}" failed. No retries permitted until 2026-04-20 19:50:49.942289494 +0000 UTC m=+2548.036539201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4m6bv" (UniqueName: "kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv") pod "csi-node-driver-5h6vg" (UID: "9f02930c-961c-4c4b-8334-b61cbd5c3d20") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:53.260978 kubelet[3163]: E0420 19:50:53.260619 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:50:53.514414 kubelet[3163]: E0420 19:50:53.211334 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:50:53.514414 kubelet[3163]: E0420 
19:50:51.746989 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 19:50:53.514414 kubelet[3163]: E0420 19:50:53.357577 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:51:01.351431564 +0000 UTC m=+2559.445681277 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:50:53.514414 kubelet[3163]: E0420 19:50:53.436056 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 
19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:50:53.700916 kubelet[3163]: E0420 19:50:53.662216 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 19:50:53.852919 kubelet[3163]: I0420 19:50:53.773377 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:50:54.048836 kubelet[3163]: E0420 19:50:54.048744 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:50:54.082664 kubelet[3163]: E0420 19:50:53.661582 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap" Apr 20 19:50:54.120363 sshd[6325]: Connection closed by 10.0.0.1 port 32854 Apr 20 19:50:54.182872 sshd-session[6292]: pam_unix(sshd:session): session closed for user core Apr 20 19:50:54.390000 audit[6292]: AUDIT1106 pid=6292 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:50:54.408000 audit[6292]: AUDIT1104 pid=6292 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:50:54.485167 kubelet[3163]: E0420 19:50:54.022449 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 19:50:54.485167 kubelet[3163]: E0420 19:50:54.286290 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:50:54.485167 kubelet[3163]: I0420 19:50:54.393432 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:50:54.485167 kubelet[3163]: E0420 19:50:54.266275 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" 
logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:50:54.556660 kernel: audit: type=1106 audit(1776714654.390:1158): pid=6292 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:50:54.575665 containerd[1659]: time="2026-04-20T19:50:54.538413567Z" level=info msg="received container exit event container_id:\"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" id:\"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" pid:4687 exited_at:{seconds:1776714654 nanos:89477321}" Apr 20 19:50:54.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10-10.0.0.14:22-10.0.0.1:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:50:54.652574 kubelet[3163]: E0420 19:50:54.541252 3163 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:54.652574 kubelet[3163]: E0420 19:50:54.554853 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:50:56.549942202 +0000 UTC m=+2554.644191898 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:54.652574 kubelet[3163]: I0420 19:50:54.557404 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:50:54.664888 kernel: audit: type=1104 audit(1776714654.408:1159): pid=6292 uid=0 auid=500 ses=56 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:50:54.585920 systemd[1]: sshd@54-10-10.0.0.14:22-10.0.0.1:32854.service: Deactivated successfully. Apr 20 19:50:54.677669 kernel: audit: type=1131 audit(1776714654.590:1160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@54-10-10.0.0.14:22-10.0.0.1:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:50:54.644476 systemd[1]: sshd@54-10-10.0.0.14:22-10.0.0.1:32854.service: Consumed 12.029s CPU time, 4.2M memory peak. Apr 20 19:50:54.717276 systemd[1]: session-56.scope: Deactivated successfully. Apr 20 19:50:54.717806 systemd[1]: session-56.scope: Consumed 21.427s CPU time, 17.9M memory peak. Apr 20 19:50:54.721223 systemd-logind[1627]: Session 56 logged out. Waiting for processes to exit. 
Apr 20 19:50:54.949056 kubelet[3163]: E0420 19:50:54.941680 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:50:54.949056 kubelet[3163]: E0420 19:50:54.942094 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 19:50:55.064408 systemd-logind[1627]: Removed session 56. Apr 20 19:50:55.150319 kubelet[3163]: E0420 19:50:55.138067 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:55.304035 kubelet[3163]: E0420 19:50:55.289202 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:55.449168 kubelet[3163]: E0420 19:50:55.448359 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:50:57.446335628 +0000 UTC m=+2555.540585343 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:55.480421 containerd[1659]: time="2026-04-20T19:50:55.474201407Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"ca83c2578e2129a961bab28d447a4221e12983d5f4eded230e337f96ef1a985d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container" Apr 20 19:50:55.599465 kubelet[3163]: E0420 19:50:55.591460 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:50:55.723158 containerd[1659]: time="2026-04-20T19:50:55.722791335Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for container name:\"calico-apiserver\" attempt:1" Apr 20 19:50:55.726247 kubelet[3163]: E0420 19:50:55.669274 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:55.726523 kubelet[3163]: E0420 19:50:55.726316 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:55.726523 kubelet[3163]: E0420 19:50:55.726468 3163 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 19:50:56.726436973 +0000 UTC m=+2554.820686685 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:50:55.726523 kubelet[3163]: E0420 19:50:55.723662 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap" Apr 20 19:50:55.726523 kubelet[3163]: E0420 19:50:55.726044 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"ca83c2578e2129a961bab28d447a4221e12983d5f4eded230e337f96ef1a985d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-shutdown"] Apr 20 19:50:55.727068 containerd[1659]: time="2026-04-20T19:50:55.727033359Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"b448aaccefe9f8e095d7214549be6806ea4c281eade06c9d7937f123647bf25b\": OCI runtime exec failed: exec failed: cannot exec in a stopped container" Apr 20 19:50:55.758318 kubelet[3163]: I0420 19:50:55.591257 3163 
status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:50:55.871356 kubelet[3163]: E0420 19:50:55.708877 3163 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:55.871356 kubelet[3163]: E0420 19:50:55.714693 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:55.871356 kubelet[3163]: E0420 19:50:55.855008 3163 projected.go:194] Error preparing data for projected volume kube-api-access-kld4g for pod calico-system/calico-apiserver-84684997fc-zpm5v: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:55.871356 kubelet[3163]: E0420 19:50:55.672297 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2m6.834s"
Apr 20 19:50:55.918305 kubelet[3163]: E0420 19:50:55.726522 3163 kuberuntime_container.go:741] "PreStop hook failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"ca83c2578e2129a961bab28d447a4221e12983d5f4eded230e337f96ef1a985d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container" pod="calico-system/calico-node-g9fs5" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" containerName="calico-node" containerID="containerd://7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9"
Apr 20 19:50:55.918305 kubelet[3163]: E0420 19:50:55.917997 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:55.918305 kubelet[3163]: E0420 19:50:55.918110 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"b448aaccefe9f8e095d7214549be6806ea4c281eade06c9d7937f123647bf25b\": OCI runtime exec failed: exec failed: cannot exec in a stopped container" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:50:55.918305 kubelet[3163]: E0420 19:50:55.918324 3163 projected.go:194] Error preparing data for projected volume kube-api-access-4m6bv for pod calico-system/csi-node-driver-5h6vg: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:55.928627 kubelet[3163]: E0420 19:50:55.928502 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:55.969955 kubelet[3163]: E0420 19:50:55.947088 3163 projected.go:194] Error preparing data for projected volume kube-api-access-5kv6b for pod calico-system/calico-node-g9fs5: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:56.085484 containerd[1659]: time="2026-04-20T19:50:56.079471217Z" level=error msg="StopContainer for \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\" failed" error="rpc error: code = Unknown desc = failed to kill container \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\": context canceled"
Apr 20 19:50:56.119451 kubelet[3163]: E0420 19:50:55.950247 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65"
Apr 20 19:50:56.128888 kubelet[3163]: E0420 19:50:56.128243 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-apiserver-localhost" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" containerName="kube-apiserver" containerID="containerd://336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65" gracePeriod=30
Apr 20 19:50:56.161298 kubelet[3163]: E0420 19:50:56.149443 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-apiserver" containerID={"Type":"containerd","ID":"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65"} pod="kube-system/kube-apiserver-localhost"
Apr 20 19:50:56.196158 kubelet[3163]: E0420 19:50:56.194130 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-apiserver\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-apiserver-localhost" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f"
Apr 20 19:50:56.260234 kubelet[3163]: E0420 19:50:55.955145 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition
Apr 20 19:50:56.339525 kubelet[3163]: E0420 19:50:55.976633 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv podName:9f02930c-961c-4c4b-8334-b61cbd5c3d20 nodeName:}" failed. No retries permitted until 2026-04-20 19:50:57.925194092 +0000 UTC m=+2556.019443798 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-4m6bv" (UniqueName: "kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv") pod "csi-node-driver-5h6vg" (UID: "9f02930c-961c-4c4b-8334-b61cbd5c3d20") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:56.463292 kubelet[3163]: E0420 19:50:56.463194 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:50:57.462113494 +0000 UTC m=+2555.556363203 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:56.464814 kubelet[3163]: E0420 19:50:56.464787 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:51:00.464754224 +0000 UTC m=+2558.559003932 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-kld4g" (UniqueName: "kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:56.468416 kubelet[3163]: E0420 19:50:56.468341 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:50:57.466907515 +0000 UTC m=+2555.561157217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5kv6b" (UniqueName: "kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:56.470106 kubelet[3163]: I0420 19:50:56.469768 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:50:56.470830 kubelet[3163]: E0420 19:50:56.470767 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:50:57.470742748 +0000 UTC m=+2555.564992449 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition
Apr 20 19:50:56.471437 kubelet[3163]: I0420 19:50:56.471366 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:50:56.472150 kubelet[3163]: I0420 19:50:56.472123 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:50:56.472619 containerd[1659]: time="2026-04-20T19:50:56.472590767Z" level=info msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" with timeout 30 (s)"
Apr 20 19:50:56.472823 kubelet[3163]: I0420 19:50:56.472745 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:50:56.473917 kubelet[3163]: I0420 19:50:56.473837 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:50:56.474323 containerd[1659]: time="2026-04-20T19:50:56.474157046Z" level=info msg="Container 094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a: CDI devices from CRI Config.CDIDevices: []"
Apr 20 19:50:56.474653 kubelet[3163]: I0420 19:50:56.474629 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:50:56.475187 kubelet[3163]: I0420 19:50:56.475111 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:50:56.475396 kubelet[3163]: I0420 19:50:56.475278 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:50:56.477006 kubelet[3163]: E0420 19:50:56.476942 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:50:56.494886 containerd[1659]: time="2026-04-20T19:50:56.486645018Z" level=info msg="Skipping the sending of signal terminated to container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" because a prior stop with timeout>0 request already sent the signal"
Apr 20 19:50:56.849032 containerd[1659]: time="2026-04-20T19:50:56.820293393Z" level=error msg="ttrpc: received message on inactive stream" stream=137
Apr 20 19:50:56.991322 systemd[1]: cri-containerd-336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65.scope: Deactivated successfully.
Apr 20 19:50:57.016000 audit: BPF prog-id=88 op=UNLOAD
Apr 20 19:50:57.016000 audit: BPF prog-id=111 op=UNLOAD
Apr 20 19:50:57.075664 kernel: audit: type=1334 audit(1776714657.016:1161): prog-id=88 op=UNLOAD
Apr 20 19:50:57.013811 systemd[1]: cri-containerd-336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65.scope: Consumed 31min 4.447s CPU time, 479.5M memory peak, 42.6M read from disk.
Apr 20 19:50:57.186433 kernel: audit: type=1334 audit(1776714657.016:1162): prog-id=111 op=UNLOAD
Apr 20 19:50:57.367144 containerd[1659]: time="2026-04-20T19:50:57.366962636Z" level=info msg="received container exit event container_id:\"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\" id:\"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\" pid:3000 exit_status:137 exited_at:{seconds:1776714657 nanos:362246788}"
Apr 20 19:50:57.595600 kubelet[3163]: E0420 19:50:57.571613 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
Apr 20 19:50:57.685428 kubelet[3163]: E0420 19:50:57.677413 3163 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:57.750112 kubelet[3163]: E0420 19:50:57.673325 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret"
Apr 20 19:50:57.781445 kubelet[3163]: E0420 19:50:57.780050 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:01.764225983 +0000 UTC m=+2559.858475683 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:57.940176 containerd[1659]: time="2026-04-20T19:50:57.935205994Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for name:\"calico-apiserver\" attempt:1 returns container id \"094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a\""
Apr 20 19:50:58.071932 kubelet[3163]: E0420 19:50:58.069309 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap"
Apr 20 19:50:58.209452 kubelet[3163]: E0420 19:50:58.171629 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.245177 containerd[1659]: time="2026-04-20T19:50:58.244953952Z" level=info msg="StartContainer for \"094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a\""
Apr 20 19:50:58.253682 kubelet[3163]: E0420 19:50:58.245042 3163 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ef51a6b32499d3d1e531fb8b3a83d4f.slice/cri-containerd-336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65.scope\": RecentStats: unable to find data in memory cache]"
Apr 20 19:50:58.320781 kubelet[3163]: E0420 19:50:58.249281 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.339327 kubelet[3163]: E0420 19:50:58.336883 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 19:51:00.335129017 +0000 UTC m=+2558.429378716 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.388015 kubelet[3163]: E0420 19:50:58.310403 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.26s"
Apr 20 19:50:58.527268 containerd[1659]: time="2026-04-20T19:50:58.522867972Z" level=info msg="connecting to shim 094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a" address="unix:///run/containerd/s/9f25d20f4617cde34f7397032d9ecbc0b43cd780bc15ce3e8713428f4b2ceb63" protocol=ttrpc version=3
Apr 20 19:50:58.594010 kubelet[3163]: E0420 19:50:58.593497 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.594010 kubelet[3163]: E0420 19:50:58.593910 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition
Apr 20 19:50:58.711018 kubelet[3163]: E0420 19:50:58.594246 3163 projected.go:194] Error preparing data for projected volume kube-api-access-5kv6b for pod calico-system/calico-node-g9fs5: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.711018 kubelet[3163]: E0420 19:50:58.594432 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:00.594077962 +0000 UTC m=+2558.688327668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition
Apr 20 19:50:58.711018 kubelet[3163]: E0420 19:50:58.593788 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.711018 kubelet[3163]: E0420 19:50:58.652348 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.711018 kubelet[3163]: E0420 19:50:58.685354 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:00.656477668 +0000 UTC m=+2558.750727370 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5kv6b" (UniqueName: "kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.711018 kubelet[3163]: E0420 19:50:58.710871 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:02.690272193 +0000 UTC m=+2560.784521901 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.712775 kubelet[3163]: E0420 19:50:58.712640 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret"
Apr 20 19:50:58.712775 kubelet[3163]: E0420 19:50:58.692525 3163 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.712913 kubelet[3163]: E0420 19:50:58.712857 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:00.712781622 +0000 UTC m=+2558.807031318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:58.723068 containerd[1659]: time="2026-04-20T19:50:58.722983598Z" level=info msg="StopContainer for \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\" with timeout 30 (s)"
Apr 20 19:50:58.872112 containerd[1659]: time="2026-04-20T19:50:58.856337024Z" level=info msg="Skipping the sending of signal terminated to container \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\" because a prior stop with timeout>0 request already sent the signal"
Apr 20 19:50:58.912856 kubelet[3163]: E0420 19:50:58.912438 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 19:50:59.138790 systemd[1]: Started cri-containerd-094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a.scope - libcontainer container 094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a.
Apr 20 19:50:59.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-8217-10.0.0.14:22-10.0.0.1:35942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:50:59.213889 kernel: audit: type=1130 audit(1776714659.148:1163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-8217-10.0.0.14:22-10.0.0.1:35942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:50:59.148517 systemd[1]: Started sshd@55-8217-10.0.0.14:22-10.0.0.1:35942.service - OpenSSH per-connection server daemon (10.0.0.1:35942).
Apr 20 19:50:59.209216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9-rootfs.mount: Deactivated successfully.
Apr 20 19:50:59.239035 containerd[1659]: time="2026-04-20T19:50:59.237647988Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"cd1964084b7ec3868c1734c4d77d6a58d767bdd171785d124d2df6b45e960925\": cannot exec in a deleted state"
Apr 20 19:50:59.256823 kubelet[3163]: E0420 19:50:59.239028 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"cd1964084b7ec3868c1734c4d77d6a58d767bdd171785d124d2df6b45e960925\": cannot exec in a deleted state" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:50:59.263945 containerd[1659]: time="2026-04-20T19:50:59.256995474Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"563ad1f73dd7f7b26549d7fd91d2f14865867d442187cc3b94c252c423bfd4fc\": task 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9 not found"
Apr 20 19:50:59.284579 kubelet[3163]: E0420 19:50:59.283987 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"563ad1f73dd7f7b26549d7fd91d2f14865867d442187cc3b94c252c423bfd4fc\": task 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9 not found" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-shutdown"]
Apr 20 19:50:59.369374 kubelet[3163]: E0420 19:50:59.291781 3163 kuberuntime_container.go:741] "PreStop hook failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"563ad1f73dd7f7b26549d7fd91d2f14865867d442187cc3b94c252c423bfd4fc\": task 7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9 not found" pod="calico-system/calico-node-g9fs5" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" containerName="calico-node" containerID="containerd://7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9"
Apr 20 19:50:59.422341 containerd[1659]: time="2026-04-20T19:50:59.421472338Z" level=info msg="StopContainer for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" with timeout 5 (s)"
Apr 20 19:50:59.440023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65-rootfs.mount: Deactivated successfully.
Apr 20 19:50:59.457308 containerd[1659]: time="2026-04-20T19:50:59.451752010Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 19:50:59.457308 containerd[1659]: time="2026-04-20T19:50:59.453612747Z" level=info msg="Container to stop \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 20 19:50:59.470299 containerd[1659]: time="2026-04-20T19:50:59.468919201Z" level=info msg="StopContainer for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" returns successfully"
Apr 20 19:50:59.470378 kubelet[3163]: E0420 19:50:59.459798 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:50:59.470378 kubelet[3163]: E0420 19:50:59.458419 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:59.490813 kubelet[3163]: E0420 19:50:59.470665 3163 projected.go:194] Error preparing data for projected volume kube-api-access-4m6bv for pod calico-system/csi-node-driver-5h6vg: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:59.512389 kubelet[3163]: E0420 19:50:59.510684 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv podName:9f02930c-961c-4c4b-8334-b61cbd5c3d20 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:03.51007871 +0000 UTC m=+2561.604328407 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4m6bv" (UniqueName: "kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv") pod "csi-node-driver-5h6vg" (UID: "9f02930c-961c-4c4b-8334-b61cbd5c3d20") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:50:59.527373 containerd[1659]: time="2026-04-20T19:50:59.527301399Z" level=info msg="StopContainer for \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\" returns successfully"
Apr 20 19:50:59.527828 containerd[1659]: time="2026-04-20T19:50:59.527758799Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 19:50:59.528464 kubelet[3163]: E0420 19:50:59.528440 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:50:59.529052 kubelet[3163]: E0420 19:50:59.528906 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:50:59.534843 containerd[1659]: time="2026-04-20T19:50:59.534660582Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 19:50:59.535000 audit: BPF prog-id=209 op=LOAD
Apr 20 19:50:59.548981 kernel: audit: type=1334 audit(1776714659.535:1164): prog-id=209 op=LOAD
Apr 20 19:50:59.549373 kubelet[3163]: E0420 19:50:59.544932 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:50:59.557177 containerd[1659]: time="2026-04-20T19:50:59.556424652Z" level=error msg="ExecSync for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 19:50:59.557794 kubelet[3163]: E0420 19:50:59.557284 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:50:59.558067 containerd[1659]: time="2026-04-20T19:50:59.557876384Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for container name:\"calico-node\" attempt:1"
Apr 20 19:50:59.586000 audit: BPF prog-id=210 op=LOAD
Apr 20 19:50:59.586000 audit[6393]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000e4240 a2=98 a3=0 items=0 ppid=5396 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:59.654114 kernel: audit: type=1334 audit(1776714659.586:1165): prog-id=210 op=LOAD
Apr 20 19:50:59.655378 kernel: audit: type=1300 audit(1776714659.586:1165): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000e4240 a2=98 a3=0 items=0 ppid=5396 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:59.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346165623139396535313431653130633461613163613030653331
Apr 20 19:50:59.660833 containerd[1659]: time="2026-04-20T19:50:59.657187332Z" level=info msg="CreateContainer within sandbox \"a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2\" for container name:\"kube-apiserver\" attempt:1"
Apr 20 19:50:59.587000 audit: BPF prog-id=210 op=UNLOAD
Apr 20 19:50:59.587000 audit[6393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:59.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346165623139396535313431653130633461613163613030653331
Apr 20 19:50:59.709040 kernel: audit: type=1327 audit(1776714659.586:1165): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346165623139396535313431653130633461613163613030653331
Apr 20 19:50:59.709141 kernel: audit: type=1334 audit(1776714659.587:1166): prog-id=210 op=UNLOAD
Apr 20 19:50:59.709217 kernel: audit: type=1300 audit(1776714659.587:1166): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:59.709233 kernel: audit: type=1327 audit(1776714659.587:1166): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346165623139396535313431653130633461613163613030653331
Apr 20 19:50:59.648000 audit: BPF prog-id=211 op=LOAD
Apr 20 19:50:59.648000 audit[6393]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000e4490 a2=98 a3=0 items=0 ppid=5396 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:59.737437 kernel: audit: type=1334 audit(1776714659.648:1167): prog-id=211 op=LOAD
Apr 20 19:50:59.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346165623139396535313431653130633461613163613030653331
Apr 20 19:50:59.749422 kernel: audit: type=1300 audit(1776714659.648:1167): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000e4490 a2=98 a3=0 items=0 ppid=5396 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:59.749793 kernel: audit: type=1327 audit(1776714659.648:1167): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346165623139396535313431653130633461613163613030653331
Apr 20 19:50:59.651000 audit: BPF prog-id=212 op=LOAD
Apr 20 19:50:59.651000 audit[6393]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000e4220 a2=98 a3=0 items=0 ppid=5396 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:59.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346165623139396535313431653130633461613163613030653331
Apr 20 19:50:59.651000 audit: BPF prog-id=212 op=UNLOAD
Apr 20 19:50:59.651000 audit[6393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:59.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346165623139396535313431653130633461613163613030653331
Apr 20 19:50:59.651000 audit: BPF prog-id=211 op=UNLOAD
Apr 20 19:50:59.651000 audit[6393]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:50:59.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346165623139396535313431653130633461613163613030653331
Apr 20 19:50:59.651000 audit: BPF prog-id=213 op=LOAD
Apr 20 19:50:59.651000 audit[6393]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000e46f0 a2=98
a3=0 items=0 ppid=5396 pid=6393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:50:59.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039346165623139396535313431653130633461613163613030653331 Apr 20 19:50:59.932000 audit[6412]: AUDIT1101 pid=6412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:50:59.946824 sshd[6412]: Accepted publickey for core from 10.0.0.1 port 35942 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:50:59.949000 audit[6412]: AUDIT1103 pid=6412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:50:59.950000 audit[6412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea6841be0 a2=3 a3=0 items=0 ppid=1 pid=6412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=57 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:50:59.950000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:50:59.962073 sshd-session[6412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:50:59.963106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1045328789.mount: Deactivated successfully. 
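The audit PROCTITLE records above carry the process command line as a hex-encoded byte string in which argv elements are separated by NUL bytes. A minimal sketch of decoding such a value (the helper name `decode_proctitle` is ours, not part of any tool; `ausearch -i` performs the same decoding):

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.

    Replaces each NUL separator with a space to recover a readable
    command line.
    """
    raw = bytes.fromhex(hex_value)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

# The runc proctitle values in the records above begin with these bytes:
print(decode_proctitle("72756E63002D2D726F6F74"))  # runc --root
# The sshd record decodes without any NUL separators:
print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
```

Applied to the full values logged above, this yields the `runc --root /run/containerd/runc/k8s.io --log ...` invocations that containerd-shim issues for each task.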
Apr 20 19:51:00.007446 containerd[1659]: time="2026-04-20T19:51:00.007413262Z" level=info msg="Container 38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a: CDI devices from CRI Config.CDIDevices: []"
Apr 20 19:51:00.008883 containerd[1659]: time="2026-04-20T19:51:00.008831403Z" level=info msg="Container 292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a: CDI devices from CRI Config.CDIDevices: []"
Apr 20 19:51:00.215196 containerd[1659]: time="2026-04-20T19:51:00.213082418Z" level=info msg="CreateContainer within sandbox \"a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2\" for name:\"kube-apiserver\" attempt:1 returns container id \"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\""
Apr 20 19:51:00.224039 systemd-logind[1627]: New session '57' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:51:00.236080 containerd[1659]: time="2026-04-20T19:51:00.235965741Z" level=info msg="StartContainer for \"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\""
Apr 20 19:51:00.237025 systemd[1]: Started session-57.scope - Session 57 of User core.
Apr 20 19:51:00.238424 containerd[1659]: time="2026-04-20T19:51:00.236441674Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for name:\"calico-node\" attempt:1 returns container id \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\""
Apr 20 19:51:00.242197 containerd[1659]: time="2026-04-20T19:51:00.240867559Z" level=info msg="StartContainer for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\""
Apr 20 19:51:00.260720 containerd[1659]: time="2026-04-20T19:51:00.257075336Z" level=info msg="connecting to shim 292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a" address="unix:///run/containerd/s/80102222aa3ed4b7ee78377cd8f0cd98fe2254d5e4d09c655e1726e3fa17fed4" protocol=ttrpc version=3
Apr 20 19:51:00.286067 containerd[1659]: time="2026-04-20T19:51:00.285980791Z" level=info msg="connecting to shim 38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" address="unix:///run/containerd/s/d6fd6f578359a16fb6047ac6b8915843558ecdd02f7ae288b74c76a061bb8a9a" protocol=ttrpc version=3
Apr 20 19:51:00.285000 audit[6412]: AUDIT1105 pid=6412 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:51:00.339000 audit[6432]: AUDIT1103 pid=6432 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:51:00.739870 containerd[1659]: time="2026-04-20T19:51:00.739665152Z" level=info msg="StartContainer for \"094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a\" returns successfully"
Apr 20 19:51:00.764688 kubelet[3163]: E0420 19:51:00.763922 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 20 19:51:01.189279 kubelet[3163]: E0420 19:51:01.187844 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 20 19:51:01.386354 systemd[1]: Started cri-containerd-292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a.scope - libcontainer container 292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a.
Apr 20 19:51:01.490818 kubelet[3163]: E0420 19:51:01.486412 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:01.520126 kubelet[3163]: E0420 19:51:01.512168 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:01.709930 kubelet[3163]: E0420 19:51:01.694304 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 19:51:05.537398289 +0000 UTC m=+2563.631647997 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:01.708121 systemd[1]: Started cri-containerd-38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a.scope - libcontainer container 38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a.
Apr 20 19:51:01.814139 kubelet[3163]: E0420 19:51:01.803828 3163 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:01.814139 kubelet[3163]: E0420 19:51:01.804613 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:05.804295859 +0000 UTC m=+2563.898545558 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:01.814139 kubelet[3163]: E0420 19:51:01.804793 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:01.816368 kubelet[3163]: E0420 19:51:01.816290 3163 projected.go:194] Error preparing data for projected volume kube-api-access-5kv6b for pod calico-system/calico-node-g9fs5: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:01.820929 kubelet[3163]: E0420 19:51:01.816834 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:01.820929 kubelet[3163]: E0420 19:51:01.817001 3163 projected.go:194] Error preparing data for projected volume kube-api-access-kld4g for pod calico-system/calico-apiserver-84684997fc-zpm5v: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:01.845854 kubelet[3163]: E0420 19:51:01.845224 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:05.845006296 +0000 UTC m=+2563.939256011 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5kv6b" (UniqueName: "kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:01.845854 kubelet[3163]: E0420 19:51:01.824512 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition
Apr 20 19:51:01.950359 kubelet[3163]: E0420 19:51:01.846151 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:05.846074061 +0000 UTC m=+2563.940323758 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition
Apr 20 19:51:01.950359 kubelet[3163]: E0420 19:51:01.940693 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:51:09.879827717 +0000 UTC m=+2567.974077420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-kld4g" (UniqueName: "kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:02.224000 audit: BPF prog-id=214 op=LOAD
Apr 20 19:51:02.450000 audit: BPF prog-id=215 op=LOAD
Apr 20 19:51:02.450000 audit[6431]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a240 a2=98 a3=0 items=0 ppid=2849 pid=6431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:02.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239326630643838313265306532326436623963343266343232393865
Apr 20 19:51:02.638000 audit: BPF prog-id=215 op=UNLOAD
Apr 20 19:51:02.638000 audit[6431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=6431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:02.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239326630643838313265306532326436623963343266343232393865
Apr 20 19:51:02.940000 audit: BPF prog-id=216 op=LOAD
Apr 20 19:51:02.940000 audit[6431]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a490 a2=98 a3=0 items=0 ppid=2849 pid=6431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:02.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239326630643838313265306532326436623963343266343232393865
Apr 20 19:51:02.948000 audit: BPF prog-id=217 op=LOAD
Apr 20 19:51:02.948000 audit[6431]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a220 a2=98 a3=0 items=0 ppid=2849 pid=6431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:02.948000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239326630643838313265306532326436623963343266343232393865
Apr 20 19:51:02.976000 audit: BPF prog-id=217 op=UNLOAD
Apr 20 19:51:02.976000 audit[6431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=6431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:02.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239326630643838313265306532326436623963343266343232393865
Apr 20 19:51:02.993000 audit: BPF prog-id=216 op=UNLOAD
Apr 20 19:51:02.993000 audit[6431]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=6431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:02.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239326630643838313265306532326436623963343266343232393865
Apr 20 19:51:03.041000 audit: BPF prog-id=218 op=LOAD
Apr 20 19:51:03.041000 audit[6431]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6f0 a2=98 a3=0 items=0 ppid=2849 pid=6431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:03.041000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239326630643838313265306532326436623963343266343232393865
Apr 20 19:51:03.060331 kubelet[3163]: E0420 19:51:03.054412 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Apr 20 19:51:03.114303 kubelet[3163]: E0420 19:51:03.091692 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:51:19.061238893 +0000 UTC m=+2577.155488591 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition
Apr 20 19:51:03.155463 kubelet[3163]: E0420 19:51:03.153462 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 20 19:51:03.327514 sshd[6432]: Connection closed by 10.0.0.1 port 35942
Apr 20 19:51:03.327900 sshd-session[6412]: pam_unix(sshd:session): session closed for user core
Apr 20 19:51:03.327000 audit[6412]: AUDIT1106 pid=6412 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:51:03.342000 audit[6412]: AUDIT1104 pid=6412 uid=0 auid=500 ses=57 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:51:03.347461 kubelet[3163]: E0420 19:51:03.345240 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 19:51:03.347461 kubelet[3163]: E0420 19:51:03.346307 3163 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:03.347461 kubelet[3163]: E0420 19:51:03.346439 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:11.3463571 +0000 UTC m=+2569.440606808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:03.617990 systemd[1]: sshd@55-8217-10.0.0.14:22-10.0.0.1:35942.service: Deactivated successfully.
Apr 20 19:51:03.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@55-8217-10.0.0.14:22-10.0.0.1:35942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:51:03.830454 containerd[1659]: time="2026-04-20T19:51:03.825129761Z" level=error msg="get state for 292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a" error="context deadline exceeded"
Apr 20 19:51:03.830454 containerd[1659]: time="2026-04-20T19:51:03.828979788Z" level=warning msg="unknown status" status=0
Apr 20 19:51:03.971466 systemd[1]: session-57.scope: Deactivated successfully.
Apr 20 19:51:03.986301 systemd[1]: session-57.scope: Consumed 1.893s CPU time, 17.7M memory peak.
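The nestedpendingoperations records above show the kubelet's retry delays for failed mount operations doubling per attempt: durationBeforeRetry goes 4s, then 8s, then 16s for repeated failures of the same volume. A minimal sketch of that capped exponential backoff pattern (the cap value and function name here are illustrative assumptions, not the kubelet's exact constants):

```python
def next_backoff(current: float, cap: float = 120.0) -> float:
    """Double the retry delay, capped at an assumed maximum (seconds).

    Mirrors the doubling durationBeforeRetry values (4s -> 8s -> 16s)
    visible in the kubelet log records above.
    """
    return min(current * 2, cap)

# Reproduce the progression of delays after consecutive failures:
delay = 4.0
delays = []
for _ in range(4):
    delays.append(delay)
    delay = next_backoff(delay)
print(delays)  # [4.0, 8.0, 16.0, 32.0]
```

Each "No retries permitted until ..." timestamp in the log is simply the failure time plus the current delay in this progression.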
Apr 20 19:51:04.012399 kubelet[3163]: E0420 19:51:04.011939 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap"
Apr 20 19:51:04.017037 kubelet[3163]: E0420 19:51:03.847909 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 2743 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:51:04.025838 kubelet[3163]: E0420 19:51:04.013850 3163 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826ee9d512a78 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,LastTimestamp:2026-04-20 19:20:59.281074808 +0000 UTC m=+757.375324536,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:51:04.020790 systemd-logind[1627]: Session 57 logged out. Waiting for processes to exit.
Apr 20 19:51:04.075287 systemd-logind[1627]: Removed session 57.
Apr 20 19:51:04.180315 containerd[1659]: time="2026-04-20T19:51:04.179461212Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 19:51:04.459940 kubelet[3163]: E0420 19:51:04.381944 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a8263f4322ee51\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a8263f4322ee51 kube-system 2448 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:08:26 +0000 UTC,LastTimestamp:2026-04-20 19:21:00.168326093 +0000 UTC m=+758.262577660,Count:14,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:51:04.505299 kubelet[3163]: E0420 19:51:04.475031 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:04.505299 kubelet[3163]: E0420 19:51:04.478367 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:04.506679 kubelet[3163]: E0420 19:51:04.504495 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:12.479408813 +0000 UTC m=+2570.573658508 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:04.506912 kubelet[3163]: E0420 19:51:04.506363 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 20 19:51:04.525157 kubelet[3163]: E0420 19:51:04.524964 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret"
Apr 20 19:51:04.621000 audit: BPF prog-id=219 op=LOAD
Apr 20 19:51:04.627599 kernel: kauditd_printk_skb: 44 callbacks suppressed
Apr 20 19:51:04.627720 kernel: audit: type=1334 audit(1776714664.621:1188): prog-id=219 op=LOAD
Apr 20 19:51:04.621000 audit[6433]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c00017a490 a2=98 a3=0 items=0 ppid=4032 pid=6433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:04.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383039653563643661663133333531636364663032393238326632
Apr 20 19:51:04.681144 kernel: audit: type=1300 audit(1776714664.621:1188): arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c00017a490 a2=98 a3=0 items=0 ppid=4032 pid=6433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:04.681477 kubelet[3163]: I0420 19:51:04.628332 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:51:04.681477 kubelet[3163]: I0420 19:51:04.629425 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:51:04.681477 kubelet[3163]: I0420 19:51:04.630076 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:51:04.681477 kubelet[3163]: I0420 19:51:04.630470 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:51:04.681000 audit: BPF prog-id=220 op=LOAD
Apr 20 19:51:04.681000 audit[6433]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a220 a2=98 a3=0 items=0 ppid=4032 pid=6433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:04.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383039653563643661663133333531636364663032393238326632
Apr 20 19:51:04.681000 audit: BPF prog-id=220 op=UNLOAD
Apr 20 19:51:04.681000 audit[6433]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=6433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:04.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383039653563643661663133333531636364663032393238326632
Apr 20 19:51:04.681000 audit: BPF prog-id=219 op=UNLOAD
Apr 20 19:51:04.681000 audit[6433]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=13 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=6433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:04.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383039653563643661663133333531636364663032393238326632
Apr 20 19:51:04.681000 audit: BPF prog-id=221 op=LOAD
Apr 20 19:51:04.681000 audit[6433]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c00017a6f0 a2=98 a3=0 items=0 ppid=4032 pid=6433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:04.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383039653563643661663133333531636364663032393238326632
Apr 20 19:51:05.072434 kernel: audit: type=1327 audit(1776714664.621:1188): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383039653563643661663133333531636364663032393238326632
Apr 20 19:51:05.084253 kernel: 
audit: type=1334 audit(1776714664.681:1189): prog-id=220 op=LOAD Apr 20 19:51:05.090498 kernel: audit: type=1300 audit(1776714664.681:1189): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a220 a2=98 a3=0 items=0 ppid=4032 pid=6433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:05.104228 kernel: audit: type=1327 audit(1776714664.681:1189): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383039653563643661663133333531636364663032393238326632 Apr 20 19:51:05.106640 kernel: audit: type=1334 audit(1776714664.681:1190): prog-id=220 op=UNLOAD Apr 20 19:51:05.107228 kernel: audit: type=1300 audit(1776714664.681:1190): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=6433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:05.107297 kernel: audit: type=1327 audit(1776714664.681:1190): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338383039653563643661663133333531636364663032393238326632 Apr 20 19:51:05.107385 kernel: audit: type=1334 audit(1776714664.681:1191): prog-id=219 op=UNLOAD Apr 20 19:51:05.126686 kubelet[3163]: E0420 19:51:05.124713 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" 
logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:51:05.283352 kubelet[3163]: I0420 19:51:05.283101 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:05.301320 containerd[1659]: time="2026-04-20T19:51:05.293232462Z" level=info msg="StartContainer for \"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\" returns successfully" Apr 20 19:51:05.323720 kubelet[3163]: I0420 19:51:05.317241 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:05.323720 kubelet[3163]: I0420 19:51:05.323213 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:05.326142 kubelet[3163]: I0420 19:51:05.325696 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:05.344894 kubelet[3163]: E0420 19:51:05.344289 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:51:05Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:51:05Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:51:05Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:51:05Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\\\",\\\"ghcr.io/flatcar/calico/node:v3.31.4\\\"],\\\"sizeBytes\\\":159838426},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\\\",\\\"ghcr.io/flatcar/calico/cni:v3.31.4\\\"],\\\"sizeBytes\\\":72167716},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.31.4\\\"],\\\"sizeBytes\\\":49971841},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\\\",\\\"quay.io/tigera/operator:v1.40.7\\\"],\\\"sizeBytes\\\":40842151},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\\\",\\\"registry.k8s.io/kube-proxy:v1.33.11\\\"],\\\"sizeBytes\\\":32009730},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\\\",\\\"registry.k8s.io/kube-apiserver:v1.33.11\\\"],\\\"sizeBytes\\\":30190588},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\\\",\\\"registry.k8s.io/kube-controller-manager
:v1.33.11\\\"],\\\"sizeBytes\\\":27737794},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\\\",\\\"registry.k8s.io/etcd:3.5.24-0\\\"],\\\"sizeBytes\\\":23716032},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\\\",\\\"registry.k8s.io/kube-scheduler:v1.33.11\\\"],\\\"sizeBytes\\\":21856121},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\\\",\\\"registry.k8s.io/coredns/coredns:v1.12.0\\\"],\\\"sizeBytes\\\":20939036},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\\\"],\\\"sizeBytes\\\":16260314},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\\\",\\\"ghcr.io/flatcar/calico/csi:v3.31.4\\\"],\\\"sizeBytes\\\":10348547},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\\\"],\\\"sizeBytes\\\":6186255},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:05.382619 kubelet[3163]: E0420 19:51:05.382337 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for 
the condition Apr 20 19:51:05.390024 kubelet[3163]: E0420 19:51:05.388960 3163 projected.go:194] Error preparing data for projected volume kube-api-access-4m6bv for pod calico-system/csi-node-driver-5h6vg: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:05.445114 kubelet[3163]: E0420 19:51:05.394369 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:05.463239 kubelet[3163]: E0420 19:51:05.461978 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv podName:9f02930c-961c-4c4b-8334-b61cbd5c3d20 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:13.394372301 +0000 UTC m=+2571.488621998 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4m6bv" (UniqueName: "kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv") pod "csi-node-driver-5h6vg" (UID: "9f02930c-961c-4c4b-8334-b61cbd5c3d20") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:05.512570 kubelet[3163]: E0420 19:51:05.512437 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:05.533117 kubelet[3163]: E0420 19:51:05.529278 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:05.660979 kubelet[3163]: E0420 19:51:05.660590 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error 
getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:05.660979 kubelet[3163]: E0420 19:51:05.660889 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 19:51:05.931337 containerd[1659]: time="2026-04-20T19:51:05.930373482Z" level=info msg="StartContainer for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" returns successfully" Apr 20 19:51:06.362494 kubelet[3163]: E0420 19:51:06.357219 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 20 19:51:06.673860 kubelet[3163]: E0420 19:51:06.667189 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:51:06.836708 kubelet[3163]: E0420 19:51:06.671256 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:06.836708 kubelet[3163]: E0420 19:51:06.711470 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:06.849527 kubelet[3163]: E0420 19:51:06.841294 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. 
No retries permitted until 2026-04-20 19:51:14.792018102 +0000 UTC m=+2572.886267812 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:06.849527 kubelet[3163]: I0420 19:51:06.837135 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:06.986509 kubelet[3163]: E0420 19:51:06.984595 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:51:07.021363 kubelet[3163]: E0420 19:51:07.018501 3163 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:07.046291 kubelet[3163]: I0420 19:51:07.041518 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:07.046291 kubelet[3163]: E0420 19:51:07.045221 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:14.993339222 +0000 UTC m=+2573.087588923 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:51:07.157273 kubelet[3163]: E0420 19:51:07.124580 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:07.157273 kubelet[3163]: E0420 19:51:07.155336 3163 projected.go:194] Error preparing data for projected volume kube-api-access-5kv6b for pod calico-system/calico-node-g9fs5: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:07.277006 kubelet[3163]: E0420 19:51:07.125311 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:15.049343122 +0000 UTC m=+2573.143592826 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:07.321781 kubelet[3163]: E0420 19:51:07.318476 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:15.308337678 +0000 UTC m=+2573.402587401 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5kv6b" (UniqueName: "kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:07.328088 kubelet[3163]: I0420 19:51:07.328003 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:07.328475 kubelet[3163]: E0420 19:51:07.327943 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:51:07.377748 kubelet[3163]: I0420 19:51:07.377002 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:07.688338 kubelet[3163]: I0420 19:51:07.688204 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:07.767753 kubelet[3163]: I0420 19:51:07.753966 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" 
pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:07.806451 kubelet[3163]: I0420 19:51:07.785479 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:07.813666 kubelet[3163]: E0420 19:51:07.813478 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 19:51:07.828046 kubelet[3163]: I0420 19:51:07.826851 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:51:09.242452 kubelet[3163]: I0420 19:51:09.242089 3163 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 20 19:51:09.448028 systemd[1]: Started sshd@56-8218-10.0.0.14:22-10.0.0.1:35430.service - OpenSSH per-connection server daemon (10.0.0.1:35430). Apr 20 19:51:09.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-8218-10.0.0.14:22-10.0.0.1:35430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:51:09.666264 kubelet[3163]: E0420 19:51:09.652745 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:51:09.787354 systemd[1]: cri-containerd-094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a.scope: Deactivated successfully. Apr 20 19:51:09.791000 audit: BPF prog-id=213 op=UNLOAD Apr 20 19:51:09.906117 kernel: kauditd_printk_skb: 6 callbacks suppressed Apr 20 19:51:09.903428 systemd[1]: cri-containerd-094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a.scope: Consumed 3.548s CPU time, 17.8M memory peak. Apr 20 19:51:09.882000 audit: BPF prog-id=209 op=UNLOAD Apr 20 19:51:09.927367 kernel: audit: type=1334 audit(1776714669.791:1194): prog-id=213 op=UNLOAD Apr 20 19:51:09.927526 kernel: audit: type=1334 audit(1776714669.882:1195): prog-id=209 op=UNLOAD Apr 20 19:51:09.948319 containerd[1659]: time="2026-04-20T19:51:09.947444962Z" level=info msg="received container exit event container_id:\"094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a\" id:\"094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a\" pid:6414 exit_status:1 exited_at:{seconds:1776714669 nanos:786341667}" Apr 20 19:51:10.474704 kubelet[3163]: E0420 19:51:10.474396 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:51:10.521000 audit[6536]: AUDIT1101 pid=6536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:10.530041 sshd[6536]: Accepted publickey for core from 10.0.0.1 port 35430 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE 
Apr 20 19:51:10.546000 audit[6536]: AUDIT1103 pid=6536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:10.548508 sshd-session[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:51:10.549479 kernel: audit: type=1101 audit(1776714670.521:1196): pid=6536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:10.549506 kernel: audit: type=1103 audit(1776714670.546:1197): pid=6536 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:10.592153 kernel: audit: type=1006 audit(1776714670.547:1198): pid=6536 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=58 res=1 Apr 20 19:51:10.547000 audit[6536]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe897212d0 a2=3 a3=0 items=0 ppid=1 pid=6536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:10.547000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:51:10.707455 kernel: audit: type=1300 audit(1776714670.547:1198): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe897212d0 a2=3 a3=0 items=0 ppid=1 pid=6536 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=58 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:10.710587 kernel: audit: type=1327 audit(1776714670.547:1198): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:51:10.973094 kubelet[3163]: E0420 19:51:10.956367 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" Apr 20 19:51:11.014974 containerd[1659]: time="2026-04-20T19:51:11.010434186Z" level=error msg="StopContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" to be killed: wait container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\": context canceled" Apr 20 19:51:11.013289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a-rootfs.mount: Deactivated successfully. 
Apr 20 19:51:11.022037 kubelet[3163]: E0420 19:51:11.013165 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" containerName="kube-scheduler" containerID="containerd://ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" gracePeriod=30 Apr 20 19:51:11.022037 kubelet[3163]: E0420 19:51:11.013222 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729"} pod="kube-system/kube-scheduler-localhost" Apr 20 19:51:11.022037 kubelet[3163]: E0420 19:51:11.016309 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" Apr 20 19:51:11.022037 kubelet[3163]: E0420 19:51:11.012433 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:11.022037 kubelet[3163]: E0420 19:51:11.018807 3163 projected.go:194] Error preparing data for projected volume kube-api-access-kld4g for pod calico-system/calico-apiserver-84684997fc-zpm5v: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:11.022037 kubelet[3163]: E0420 19:51:11.019407 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. 
No retries permitted until 2026-04-20 19:51:27.019226155 +0000 UTC m=+2585.113475868 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-kld4g" (UniqueName: "kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:11.065373 systemd-logind[1627]: New session '58' of user 'core' with class 'user' and type 'tty'. Apr 20 19:51:11.161894 systemd[1]: Started session-58.scope - Session 58 of User core. Apr 20 19:51:11.526000 audit[6536]: AUDIT1105 pid=6536 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:11.543168 kernel: audit: type=1105 audit(1776714671.526:1199): pid=6536 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:11.672923 containerd[1659]: time="2026-04-20T19:51:11.668637669Z" level=info msg="StopContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" with timeout 30 (s)" Apr 20 19:51:11.716105 kubelet[3163]: I0420 19:51:11.711818 3163 scope.go:117] "RemoveContainer" containerID="094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a" Apr 20 19:51:11.729000 audit[6573]: AUDIT1103 pid=6573 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:11.746476 kernel: audit: type=1103 audit(1776714671.729:1200): pid=6573 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:11.748423 containerd[1659]: time="2026-04-20T19:51:11.746930552Z" level=info msg="Skipping the sending of signal terminated to container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" because a prior stop with timeout>0 request already sent the signal" Apr 20 19:51:12.009588 containerd[1659]: time="2026-04-20T19:51:12.009029545Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for container name:\"calico-apiserver\" attempt:2" Apr 20 19:51:12.222978 containerd[1659]: time="2026-04-20T19:51:12.222268652Z" level=info msg="Container 13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:51:12.343969 containerd[1659]: time="2026-04-20T19:51:12.340696184Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for name:\"calico-apiserver\" attempt:2 returns container id \"13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d\"" Apr 20 19:51:12.506223 containerd[1659]: time="2026-04-20T19:51:12.490428196Z" level=info msg="StartContainer for \"13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d\"" Apr 20 19:51:12.527402 kubelet[3163]: E0420 19:51:12.514878 3163 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:12.529857 kubelet[3163]: E0420 19:51:12.529099 3163 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:28.528644757 +0000 UTC m=+2586.622894471 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:12.663977 containerd[1659]: time="2026-04-20T19:51:12.663739639Z" level=info msg="connecting to shim 13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d" address="unix:///run/containerd/s/9f25d20f4617cde34f7397032d9ecbc0b43cd780bc15ce3e8713428f4b2ceb63" protocol=ttrpc version=3 Apr 20 19:51:13.026044 systemd[1]: Started cri-containerd-13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d.scope - libcontainer container 13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d. Apr 20 19:51:14.366246 kubelet[3163]: E0420 19:51:14.345445 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:14.588116 kubelet[3163]: E0420 19:51:14.558505 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:14.622141 kubelet[3163]: E0420 19:51:14.618948 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:30.602297921 +0000 UTC m=+2588.696547641 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:15.349177 kubelet[3163]: E0420 19:51:15.348224 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:15.533162 kubelet[3163]: E0420 19:51:15.529265 3163 projected.go:194] Error preparing data for projected volume kube-api-access-4m6bv for pod calico-system/csi-node-driver-5h6vg: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:15.579505 kubelet[3163]: E0420 19:51:15.548386 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv podName:9f02930c-961c-4c4b-8334-b61cbd5c3d20 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:31.548027078 +0000 UTC m=+2589.642276774 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4m6bv" (UniqueName: "kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv") pod "csi-node-driver-5h6vg" (UID: "9f02930c-961c-4c4b-8334-b61cbd5c3d20") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:15.579505 kubelet[3163]: E0420 19:51:15.529416 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.669s" Apr 20 19:51:16.371480 kubelet[3163]: E0420 19:51:16.368175 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:16.415820 kubelet[3163]: E0420 19:51:16.376338 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:16.475355 containerd[1659]: time="2026-04-20T19:51:16.384487016Z" level=error msg="get state for 13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d" error="context deadline exceeded" Apr 20 19:51:16.713108 containerd[1659]: time="2026-04-20T19:51:16.549433914Z" level=warning msg="unknown status" status=0 Apr 20 19:51:16.720177 kubelet[3163]: E0420 19:51:16.720035 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:16.747996 kubelet[3163]: E0420 19:51:16.747384 3163 projected.go:194] Error preparing data for projected volume kube-api-access-5kv6b for pod calico-system/calico-node-g9fs5: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:16.874429 kubelet[3163]: E0420 19:51:16.676038 3163 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:16.893361 
kubelet[3163]: E0420 19:51:16.645713 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:51:16.917164 kubelet[3163]: E0420 19:51:16.749070 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.177s" Apr 20 19:51:16.924888 kubelet[3163]: E0420 19:51:16.924852 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:32.913376159 +0000 UTC m=+2591.007625859 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:51:16.924888 kubelet[3163]: E0420 19:51:16.924894 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:32.924886162 +0000 UTC m=+2591.019135864 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:16.937626 kubelet[3163]: E0420 19:51:16.924929 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. 
No retries permitted until 2026-04-20 19:51:32.924923857 +0000 UTC m=+2591.019173554 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:16.937626 kubelet[3163]: E0420 19:51:16.936207 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:51:32.935810145 +0000 UTC m=+2591.030059842 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-5kv6b" (UniqueName: "kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:17.281722 kubelet[3163]: E0420 19:51:17.280269 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:51:18.129000 audit: BPF prog-id=222 op=LOAD Apr 20 19:51:18.148624 kernel: audit: type=1334 audit(1776714678.129:1201): prog-id=222 op=LOAD Apr 20 19:51:18.342000 audit: BPF prog-id=223 op=LOAD Apr 20 19:51:18.342000 audit[6582]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000154240 a2=98 a3=0 items=0 ppid=5396 pid=6582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:18.355838 kernel: audit: type=1334 audit(1776714678.342:1202): prog-id=223 op=LOAD Apr 20 
19:51:18.356408 kernel: audit: type=1300 audit(1776714678.342:1202): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000154240 a2=98 a3=0 items=0 ppid=5396 pid=6582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:18.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133613533303763656266386532346138636635363966386235303862 Apr 20 19:51:18.348000 audit: BPF prog-id=223 op=UNLOAD Apr 20 19:51:18.348000 audit[6582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:18.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133613533303763656266386532346138636635363966386235303862 Apr 20 19:51:18.354000 audit: BPF prog-id=224 op=LOAD Apr 20 19:51:18.354000 audit[6582]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000154490 a2=98 a3=0 items=0 ppid=5396 pid=6582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:18.354000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133613533303763656266386532346138636635363966386235303862 Apr 20 19:51:18.429647 kernel: audit: type=1327 audit(1776714678.342:1202): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133613533303763656266386532346138636635363966386235303862 Apr 20 19:51:18.429808 kernel: audit: type=1334 audit(1776714678.348:1203): prog-id=223 op=UNLOAD Apr 20 19:51:18.429905 kernel: audit: type=1300 audit(1776714678.348:1203): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:18.429922 kernel: audit: type=1327 audit(1776714678.348:1203): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133613533303763656266386532346138636635363966386235303862 Apr 20 19:51:18.429990 kernel: audit: type=1334 audit(1776714678.354:1204): prog-id=224 op=LOAD Apr 20 19:51:18.430008 kernel: audit: type=1300 audit(1776714678.354:1204): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000154490 a2=98 a3=0 items=0 ppid=5396 pid=6582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:18.430020 kernel: audit: type=1327 audit(1776714678.354:1204): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133613533303763656266386532346138636635363966386235303862 Apr 20 19:51:18.360000 audit: BPF prog-id=225 op=LOAD Apr 20 19:51:18.360000 audit[6582]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000154220 a2=98 a3=0 items=0 ppid=5396 pid=6582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:18.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133613533303763656266386532346138636635363966386235303862 Apr 20 19:51:18.365000 audit: BPF prog-id=225 op=UNLOAD Apr 20 19:51:18.365000 audit[6582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:18.365000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133613533303763656266386532346138636635363966386235303862 Apr 20 19:51:18.380000 audit: BPF prog-id=224 op=UNLOAD Apr 20 19:51:18.380000 audit[6582]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 
19:51:18.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133613533303763656266386532346138636635363966386235303862 Apr 20 19:51:18.380000 audit: BPF prog-id=226 op=LOAD Apr 20 19:51:18.380000 audit[6582]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001546f0 a2=98 a3=0 items=0 ppid=5396 pid=6582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:51:18.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133613533303763656266386532346138636635363966386235303862 Apr 20 19:51:19.126793 containerd[1659]: time="2026-04-20T19:51:19.126276383Z" level=error msg="get state for 13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d" error="context deadline exceeded" Apr 20 19:51:19.148276 containerd[1659]: time="2026-04-20T19:51:19.137009806Z" level=warning msg="unknown status" status=0 Apr 20 19:51:20.134813 containerd[1659]: time="2026-04-20T19:51:20.120844083Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 19:51:20.244604 containerd[1659]: time="2026-04-20T19:51:20.149122377Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 19:51:20.274465 kubelet[3163]: E0420 19:51:20.136854 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:51:20.274465 kubelet[3163]: E0420 19:51:20.146485 3163 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:51:52.143648999 +0000 UTC m=+2610.237898696 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:51:20.960394 kubelet[3163]: E0420 19:51:20.950881 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 19:51:21.913638 kubelet[3163]: I0420 19:51:21.911839 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" Apr 20 19:51:22.069379 kubelet[3163]: E0420 19:51:21.887491 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a8263f4322ee51\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a8263f4322ee51 kube-system 2448 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:08:26 +0000 UTC,LastTimestamp:2026-04-20 19:21:00.168326093 +0000 UTC m=+758.262577660,Count:14,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:51:22.280481 kubelet[3163]: E0420 19:51:22.243312 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 19:51:22.633997 kubelet[3163]: E0420 19:51:22.603635 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.713s" Apr 20 19:51:23.052340 sshd[6573]: Connection closed by 10.0.0.1 port 35430 Apr 20 19:51:23.068110 sshd-session[6536]: pam_unix(sshd:session): session closed for user core Apr 20 19:51:23.280000 audit[6536]: AUDIT1106 pid=6536 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:23.329000 audit[6536]: AUDIT1104 pid=6536 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:23.371071 kernel: kauditd_printk_skb: 12 callbacks suppressed Apr 20 19:51:23.373712 kernel: audit: type=1106 audit(1776714683.280:1209): pid=6536 uid=0 auid=500 ses=58 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:23.388246 kernel: audit: type=1104 audit(1776714683.329:1210): pid=6536 uid=0 auid=500 ses=58 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:51:23.727279 systemd[1]: sshd@56-8218-10.0.0.14:22-10.0.0.1:35430.service: Deactivated successfully. Apr 20 19:51:23.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-8218-10.0.0.14:22-10.0.0.1:35430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:51:24.025315 kernel: audit: type=1131 audit(1776714683.875:1211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@56-8218-10.0.0.14:22-10.0.0.1:35430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:51:24.062462 systemd[1]: session-58.scope: Deactivated successfully. Apr 20 19:51:24.259420 systemd[1]: session-58.scope: Consumed 1.095s CPU time, 17M memory peak. Apr 20 19:51:24.768029 systemd-logind[1627]: Session 58 logged out. Waiting for processes to exit. Apr 20 19:51:25.275997 systemd-logind[1627]: Removed session 58. 
Apr 20 19:51:26.169195 kubelet[3163]: E0420 19:51:26.168165 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:51:26.183343 kubelet[3163]: E0420 19:51:26.182691 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:51:26.654145 containerd[1659]: time="2026-04-20T19:51:26.536455647Z" level=info msg="Kill container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\"" Apr 20 19:51:26.685353 containerd[1659]: time="2026-04-20T19:51:26.685241975Z" level=info msg="StartContainer for \"13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d\" returns successfully" Apr 20 19:51:27.062191 kubelet[3163]: E0420 19:51:27.056803 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap" Apr 20 19:51:27.166235 kubelet[3163]: E0420 19:51:27.057073 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:51:27.410882 kubelet[3163]: E0420 19:51:27.388425 3163 
kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:51:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:51:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:51:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:51:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\\\",\\\"ghcr.io/flatcar/calico/node:v3.31.4\\\"],\\\"sizeBytes\\\":159838426},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\\\",\\\"ghcr.io/flatcar/calico/cni:v3.31.4\\\"],\\\"sizeBytes\\\":72167716},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.31.4\\\"],\\\"sizeBytes\\\":49971841},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\\\",\\\"quay.io/tigera/operator:v1.40.7\\\"],\\\"sizeBytes\\\":40842151},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\\\",\\\"registry.k8s.io/kube-proxy:v1.33.11\\\"],\\\"sizeBytes\\\":32009730},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\\\",\\\"registry.k8s.io/kube-apiserver:v1.33.11\\\"],\\\"sizeBytes\\\":30190588},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c271
5da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\\\",\\\"registry.k8s.io/kube-controller-manager:v1.33.11\\\"],\\\"sizeBytes\\\":27737794},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\\\",\\\"registry.k8s.io/etcd:3.5.24-0\\\"],\\\"sizeBytes\\\":23716032},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\\\",\\\"registry.k8s.io/kube-scheduler:v1.33.11\\\"],\\\"sizeBytes\\\":21856121},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\\\",\\\"registry.k8s.io/coredns/coredns:v1.12.0\\\"],\\\"sizeBytes\\\":20939036},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\\\"],\\\"sizeBytes\\\":16260314},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\\\",\\\"ghcr.io/flatcar/calico/csi:v3.31.4\\\"],\\\"sizeBytes\\\":10348547},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\\\"],\\\"sizeBytes\\\":6186255},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 19:51:27.464366 kubelet[3163]: E0420 19:51:27.254490 3163 
reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:51:27.567270 kubelet[3163]: E0420 19:51:26.994411 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:51:27.915455 kubelet[3163]: E0420 19:51:27.915333 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.05s" Apr 20 19:51:28.693429 kubelet[3163]: E0420 19:51:28.665221 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:28.894258 kubelet[3163]: E0420 19:51:28.886211 3163 projected.go:194] Error preparing data for projected volume kube-api-access-kld4g for pod calico-system/calico-apiserver-84684997fc-zpm5v: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:51:28.983370 kubelet[3163]: E0420 19:51:28.978025 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:52:00.977266974 +0000 UTC m=+2619.071516682 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kld4g" (UniqueName: "kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:29.688194 systemd[1]: Started sshd@57-4107-10.0.0.14:22-10.0.0.1:46212.service - OpenSSH per-connection server daemon (10.0.0.1:46212).
Apr 20 19:51:29.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-4107-10.0.0.14:22-10.0.0.1:46212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:51:30.083165 kernel: audit: type=1130 audit(1776714689.763:1212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-4107-10.0.0.14:22-10.0.0.1:46212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:51:30.086393 kubelet[3163]: E0420 19:51:29.763691 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.848s"
Apr 20 19:51:31.144104 kubelet[3163]: E0420 19:51:31.118428 3163 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:38.623427 kubelet[3163]: E0420 19:51:36.983583 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:52:05.441493211 +0000 UTC m=+2623.535743099 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:42.347261 kubelet[3163]: E0420 19:51:42.344389 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:42.550274 containerd[1659]: time="2026-04-20T19:51:42.549940144Z" level=info msg="Kill container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\""
Apr 20 19:51:44.571931 kubelet[3163]: I0420 19:51:44.561272 3163 request.go:752] "Waited before sending request" delay="1.261698181s" reason="client-side throttling, not priority and fairness" verb="PATCH" URL="https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a8263f4322ee51"
Apr 20 19:51:46.097760 kubelet[3163]: E0420 19:51:46.083750 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 19:51:46.097760 kubelet[3163]: E0420 19:51:46.086898 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:51:46.408000 audit[6647]: AUDIT1101 pid=6647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:51:46.875193 sshd[6647]: Accepted publickey for core from 10.0.0.1 port 46212 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:51:47.891935 kernel: audit: type=1101 audit(1776714706.408:1213): pid=6647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:51:47.998954 kubelet[3163]: E0420 19:51:44.763277 3163 projected.go:194] Error preparing data for projected volume kube-api-access-4m6bv for pod calico-system/csi-node-driver-5h6vg: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:48.234000 audit[6647]: AUDIT1103 pid=6647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:51:48.365000 audit[6647]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe75708d00 a2=3 a3=0 items=0 ppid=1 pid=6647 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:48.365000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:51:48.742728 kernel: audit: type=1103 audit(1776714708.234:1214): pid=6647 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:51:48.882910 kernel: audit: type=1006 audit(1776714708.365:1215): pid=6647 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=59 res=1
Apr 20 19:51:48.883971 kernel: audit: type=1300 audit(1776714708.365:1215): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe75708d00 a2=3 a3=0 items=0 ppid=1 pid=6647 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=59 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:51:48.884094 kernel: audit: type=1327 audit(1776714708.365:1215): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:51:49.690632 sshd-session[6647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:51:54.825309 kubelet[3163]: E0420 19:51:52.602406 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:51:55.380403 kubelet[3163]: E0420 19:51:46.617259 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 20 19:51:55.968071 systemd-logind[1627]: New session '59' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:51:56.258913 systemd[1]: Started session-59.scope - Session 59 of User core.
Apr 20 19:51:58.938000 audit[6647]: AUDIT1105 pid=6647 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:51:59.456189 kernel: audit: type=1105 audit(1776714718.938:1216): pid=6647 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:51:59.983410 kubelet[3163]: I0420 19:51:50.973855 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": net/http: TLS handshake timeout"
Apr 20 19:52:00.322207 kubelet[3163]: E0420 19:51:57.811441 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:00.322207 kubelet[3163]: E0420 19:52:00.286117 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
Apr 20 19:52:00.468000 audit[6660]: AUDIT1103 pid=6660 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:52:00.821961 kernel: audit: type=1103 audit(1776714720.468:1217): pid=6660 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:52:01.013018 kubelet[3163]: E0420 19:51:58.973429 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 20 19:52:01.284121 kubelet[3163]: E0420 19:52:00.760809 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv podName:9f02930c-961c-4c4b-8334-b61cbd5c3d20 nodeName:}" failed. No retries permitted until 2026-04-20 19:52:24.280258403 +0000 UTC m=+2642.374508102 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-4m6bv" (UniqueName: "kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv") pod "csi-node-driver-5h6vg" (UID: "9f02930c-961c-4c4b-8334-b61cbd5c3d20") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:05.555045 containerd[1659]: time="2026-04-20T19:52:05.548887929Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 20 19:52:06.049403 kubelet[3163]: E0420 19:52:05.548342 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:52:34.274193257 +0000 UTC m=+2652.368442973 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:10.121976 kubelet[3163]: E0420 19:52:10.117560 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:52:10.578480 kubelet[3163]: E0420 19:52:10.185087 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 20 19:52:11.749491 kubelet[3163]: E0420 19:52:09.093436 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a8263f4322ee51\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a8263f4322ee51 kube-system 2448 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:08:26 +0000 UTC,LastTimestamp:2026-04-20 19:21:00.168326093 +0000 UTC m=+758.262577660,Count:14,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:52:11.773449 kubelet[3163]: E0420 19:52:11.769244 3163 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:12.654286 kubelet[3163]: E0420 19:52:12.653959 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:53:16.638349639 +0000 UTC m=+2694.732599347 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:13.224301 kubelet[3163]: I0420 19:52:13.224114 3163 scope.go:117] "RemoveContainer" containerID="d60bfc28bae39dd4c39466e0fffee6553b16b69bc14ddc6752a782b3abc019c6"
Apr 20 19:52:13.758516 kubelet[3163]: E0420 19:52:13.745954 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:13.772401 kubelet[3163]: E0420 19:52:13.762285 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:13.881754 kubelet[3163]: E0420 19:52:13.878369 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:14.153882 kubelet[3163]: E0420 19:52:14.152133 3163 projected.go:194] Error preparing data for projected volume kube-api-access-kld4g for pod calico-system/calico-apiserver-84684997fc-zpm5v: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:14.957913 kubelet[3163]: E0420 19:52:14.192648 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:15.279441 kubelet[3163]: E0420 19:52:15.174950 3163 projected.go:194] Error preparing data for projected volume kube-api-access-5kv6b for pod calico-system/calico-node-g9fs5: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:15.516078 kubelet[3163]: E0420 19:52:15.492994 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition
Apr 20 19:52:16.075010 kubelet[3163]: E0420 19:52:15.522961 3163 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:16.262168 kubelet[3163]: E0420 19:52:16.085083 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:53:19.557071472 +0000 UTC m=+2697.651321206 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-kld4g" (UniqueName: "kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:16.262168 kubelet[3163]: E0420 19:52:16.128387 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:52:48.086156486 +0000 UTC m=+2666.180406186 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-5kv6b" (UniqueName: "kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:16.286204 kubelet[3163]: E0420 19:52:16.052311 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret"
Apr 20 19:52:16.692250 kubelet[3163]: E0420 19:52:16.580221 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 19:52:48.165844778 +0000 UTC m=+2666.260094496 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:16.853161 kubelet[3163]: E0420 19:52:16.787849 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret"
Apr 20 19:52:16.853161 kubelet[3163]: E0420 19:52:16.786408 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Apr 20 19:52:17.128326 kubelet[3163]: E0420 19:52:16.975430 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:52:48.847216764 +0000 UTC m=+2666.941466463 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition
Apr 20 19:52:17.766427 kubelet[3163]: E0420 19:52:17.763144 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:52:49.686416481 +0000 UTC m=+2667.780666185 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:18.589053 kubelet[3163]: E0420 19:52:18.570192 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:53:21.827420589 +0000 UTC m=+2699.921670295 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition
Apr 20 19:52:19.691095 kubelet[3163]: I0420 19:52:19.557506 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": net/http: TLS handshake timeout"
Apr 20 19:52:20.064802 kubelet[3163]: E0420 19:52:19.964020 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap"
Apr 20 19:52:20.523489 kubelet[3163]: E0420 19:52:20.523245 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 19:52:20.890331 kubelet[3163]: E0420 19:52:20.849489 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 20 19:52:27.817165 kubelet[3163]: E0420 19:52:26.159390 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 20 19:52:29.238270 kubelet[3163]: E0420 19:52:26.777208 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Apr 20 19:52:29.943199 kubelet[3163]: E0420 19:52:29.937211 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 19:52:31.889674 kubelet[3163]: E0420 19:52:25.456340 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 20 19:52:32.741296 containerd[1659]: time="2026-04-20T19:52:32.509498222Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}"
Apr 20 19:52:34.214414 kubelet[3163]: E0420 19:52:32.088057 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 20 19:52:37.170919 containerd[1659]: time="2026-04-20T19:52:37.162025963Z" level=info msg="RemoveContainer for \"d60bfc28bae39dd4c39466e0fffee6553b16b69bc14ddc6752a782b3abc019c6\""
Apr 20 19:52:37.545999 kubelet[3163]: E0420 19:52:36.346462 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:38.086155 kubelet[3163]: I0420 19:52:38.077958 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout"
Apr 20 19:52:38.120235 kubelet[3163]: E0420 19:52:38.094283 3163 projected.go:194] Error preparing data for projected volume kube-api-access-4m6bv for pod calico-system/csi-node-driver-5h6vg: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:38.447159 kubelet[3163]: E0420 19:52:38.334900 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv podName:9f02930c-961c-4c4b-8334-b61cbd5c3d20 nodeName:}" failed. No retries permitted until 2026-04-20 19:53:42.117494903 +0000 UTC m=+2720.211744600 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4m6bv" (UniqueName: "kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv") pod "csi-node-driver-5h6vg" (UID: "9f02930c-961c-4c4b-8334-b61cbd5c3d20") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:42.297179 kubelet[3163]: E0420 19:52:42.268912 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:43.291000 audit[6647]: AUDIT1106 pid=6647 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:52:43.328258 kubelet[3163]: E0420 19:52:43.291279 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:43.291000 audit[6647]: AUDIT1104 pid=6647 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:52:43.781515 sshd[6660]: Connection closed by 10.0.0.1 port 46212
Apr 20 19:52:43.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-4107-10.0.0.14:22-10.0.0.1:46212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:52:42.987242 sshd-session[6647]: pam_unix(sshd:session): session closed for user core
Apr 20 19:52:43.976330 kernel: audit: type=1106 audit(1776714763.291:1218): pid=6647 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:52:44.072184 containerd[1659]: time="2026-04-20T19:52:43.329221263Z" level=error msg="get state for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="context deadline exceeded"
Apr 20 19:52:44.072184 containerd[1659]: time="2026-04-20T19:52:43.329334067Z" level=warning msg="unknown status" status=0
Apr 20 19:52:43.675988 systemd[1]: sshd@57-4107-10.0.0.14:22-10.0.0.1:46212.service: Deactivated successfully.
Apr 20 19:52:44.585466 kernel: audit: type=1104 audit(1776714763.291:1219): pid=6647 uid=0 auid=500 ses=59 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:52:43.840112 systemd[1]: sshd@57-4107-10.0.0.14:22-10.0.0.1:46212.service: Consumed 4.189s CPU time, 4.1M memory peak.
Apr 20 19:52:44.727145 kubelet[3163]: E0420 19:52:43.767493 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 20 19:52:44.727145 kubelet[3163]: E0420 19:52:44.692265 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count"
Apr 20 19:52:44.955261 containerd[1659]: time="2026-04-20T19:52:44.516230270Z" level=error msg="ttrpc: received message on inactive stream" stream=285
Apr 20 19:52:44.960723 kernel: audit: type=1131 audit(1776714763.820:1220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@57-4107-10.0.0.14:22-10.0.0.1:46212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:52:44.553027 systemd[1]: session-59.scope: Deactivated successfully.
Apr 20 19:52:45.128507 kubelet[3163]: E0420 19:52:44.958660 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m13.68s"
Apr 20 19:52:44.686389 systemd[1]: session-59.scope: Consumed 17.998s CPU time, 17.7M memory peak.
Apr 20 19:52:45.191885 systemd-logind[1627]: Session 59 logged out. Waiting for processes to exit.
Apr 20 19:52:45.515669 kubelet[3163]: E0420 19:52:45.497168 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:53:47.767201401 +0000 UTC m=+2725.861451104 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:52:45.945424 systemd-logind[1627]: Removed session 59.
Apr 20 19:52:47.184312 containerd[1659]: time="2026-04-20T19:52:47.183012870Z" level=error msg="failed to delete task" error="context deadline exceeded" id=ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729
Apr 20 19:52:47.585715 containerd[1659]: time="2026-04-20T19:52:47.279093357Z" level=error msg="failed to drain init process ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729 io" error="context deadline exceeded" runtime=io.containerd.runc.v2
Apr 20 19:52:48.186399 containerd[1659]: time="2026-04-20T19:52:48.044364871Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 20 19:52:49.019322 containerd[1659]: time="2026-04-20T19:52:49.016925121Z" level=info msg="RemoveContainer for \"d60bfc28bae39dd4c39466e0fffee6553b16b69bc14ddc6752a782b3abc019c6\" returns successfully"
Apr 20 19:52:49.434672 containerd[1659]: time="2026-04-20T19:52:49.391078627Z" level=error msg="ttrpc: received message on inactive stream" stream=287
Apr 20 19:52:50.084458 kubelet[3163]: E0420 19:52:50.028894 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a8263f4322ee51\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a8263f4322ee51 kube-system 2448 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:08:26 +0000 UTC,LastTimestamp:2026-04-20 19:21:00.168326093 +0000 UTC m=+758.262577660,Count:14,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:52:51.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-11-10.0.0.14:22-10.0.0.1:40778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:52:51.595306 systemd[1]: Started sshd@58-11-10.0.0.14:22-10.0.0.1:40778.service - OpenSSH per-connection server daemon (10.0.0.1:40778).
Apr 20 19:52:52.357113 kernel: audit: type=1130 audit(1776714771.646:1221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-11-10.0.0.14:22-10.0.0.1:40778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:52:54.937428 kubelet[3163]: E0420 19:52:54.909352 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
Apr 20 19:52:55.440341 containerd[1659]: time="2026-04-20T19:52:55.329347719Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}"
Apr 20 19:52:58.619485 kubelet[3163]: E0420 19:52:58.581571 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 20 19:53:05.365991 containerd[1659]: time="2026-04-20T19:53:05.355365042Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:53:05.742168 containerd[1659]: time="2026-04-20T19:53:05.741884884Z" level=error msg="ttrpc: received message on inactive stream" stream=273
Apr 20 19:53:05.818688 containerd[1659]: time="2026-04-20T19:53:05.814861169Z" level=error msg="ttrpc: received message on inactive stream" stream=275
Apr 20 19:53:06.366000 audit[6705]: AUDIT1101 pid=6705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:53:06.461000 audit[6705]: AUDIT1103 pid=6705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:53:06.560492 sshd[6705]: Accepted publickey for core from 10.0.0.1 port 40778 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:53:06.489000 audit[6705]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff42bc5aa0 a2=3 a3=0 items=0 ppid=1 pid=6705 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:53:06.489000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:53:06.682909 kernel: audit: type=1101 audit(1776714786.366:1222): pid=6705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:53:06.697429 kernel: audit: type=1103 audit(1776714786.461:1223): pid=6705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:53:06.740973 kernel: audit: type=1006 audit(1776714786.489:1224): pid=6705 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=60 res=1
Apr 20 19:53:06.742304 kernel: audit: type=1300 audit(1776714786.489:1224): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff42bc5aa0 a2=3 a3=0 items=0 ppid=1 pid=6705 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=60 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:53:06.742524 kernel: audit: type=1327 audit(1776714786.489:1224): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:53:06.767829 sshd-session[6705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:53:06.789672 kubelet[3163]: E0420 19:53:06.787893 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:53:06.789672 kubelet[3163]: E0420 19:53:06.788004 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition
Apr 20 19:53:06.789672 kubelet[3163]: E0420 19:53:06.788203 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 19:54:10.788108272 +0000 UTC m=+2748.882357977 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:06.789672 kubelet[3163]: E0420 19:53:06.788301 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:53:06.790458 kubelet[3163]: E0420 19:53:06.790428 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:53:06.863865 kubelet[3163]: I0420 19:53:06.461382 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": net/http: TLS handshake timeout" Apr 20 19:53:06.990945 systemd-logind[1627]: New session '60' of user 'core' with class 'user' and type 'tty'. Apr 20 19:53:07.070613 systemd[1]: Started session-60.scope - Session 60 of User core. 
Apr 20 19:53:07.783725 kubelet[3163]: E0420 19:53:07.770333 3163 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:07.787422 kubelet[3163]: E0420 19:53:07.787302 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:54:11.784123255 +0000 UTC m=+2749.878372951 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:07.836000 audit[6705]: AUDIT1105 pid=6705 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:53:08.056410 kernel: audit: type=1105 audit(1776714787.836:1225): pid=6705 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:53:08.150922 kubelet[3163]: E0420 19:53:07.838644 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:08.150922 kubelet[3163]: E0420 19:53:07.838901 3163 projected.go:194] Error preparing data for projected volume 
kube-api-access-5kv6b for pod calico-system/calico-node-g9fs5: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:08.150922 kubelet[3163]: E0420 19:53:07.839017 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:54:11.838979458 +0000 UTC m=+2749.933229157 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5kv6b" (UniqueName: "kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:09.062000 audit[6723]: AUDIT1103 pid=6723 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:53:09.698123 kernel: audit: type=1103 audit(1776714789.062:1226): pid=6723 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:53:13.022082 kubelet[3163]: E0420 19:53:13.018452 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 19:53:14.232510 kubelet[3163]: E0420 19:53:13.774462 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 
19:53:14.675392 kubelet[3163]: E0420 19:53:14.673363 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:54:18.596323461 +0000 UTC m=+2756.690573170 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:53:16.288165 kubelet[3163]: E0420 19:53:16.266482 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="31.306s" Apr 20 19:53:22.684691 kubelet[3163]: E0420 19:53:21.045813 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a8263f4322ee51\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a8263f4322ee51 kube-system 2448 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:08:26 +0000 UTC,LastTimestamp:2026-04-20 19:21:00.168326093 +0000 UTC m=+758.262577660,Count:14,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:53:23.561294 kubelet[3163]: E0420 19:53:23.310386 3163 reflector.go:200] "Failed to 
watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:53:23.743925 kubelet[3163]: E0420 19:53:23.727046 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:53:23.743925 kubelet[3163]: E0420 19:53:23.728207 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 19:53:24.081486 kubelet[3163]: E0420 19:53:24.031328 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:53:06Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:53:06Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:53:06Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:53:06Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\\\",\\\"ghcr.io/flatcar/calico/node:v3.31.4\\\"],\\\"sizeBytes\\\":159838426},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\\\",\\\"ghcr.io/flatcar/calico/cni:v3.31.4\\\"],\\\"sizeBytes\\\":72167716},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.31.4\\\"],\\\"sizeBytes\\\":49971841},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\\\",\\\"quay.io/tigera/operator:v1.40.7\\\"],\\\"sizeBytes\\\":40842151},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\\\",\\\"registry.k8s.io/kube-proxy:v1.33.11\\\"],\\\"sizeBytes\\\":32009730},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\\\",\\\"registry.k8s.io/kube-apiserver:v1.33.11\\\"],\\\"sizeBytes\\\":30190588},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\\\",\\\"registry.k8s.io/kube-controller-manager
:v1.33.11\\\"],\\\"sizeBytes\\\":27737794},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\\\",\\\"registry.k8s.io/etcd:3.5.24-0\\\"],\\\"sizeBytes\\\":23716032},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\\\",\\\"registry.k8s.io/kube-scheduler:v1.33.11\\\"],\\\"sizeBytes\\\":21856121},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\\\",\\\"registry.k8s.io/coredns/coredns:v1.12.0\\\"],\\\"sizeBytes\\\":20939036},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\\\"],\\\"sizeBytes\\\":16260314},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\\\",\\\"ghcr.io/flatcar/calico/csi:v3.31.4\\\"],\\\"sizeBytes\\\":10348547},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\\\"],\\\"sizeBytes\\\":6186255},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 20 19:53:25.184269 kubelet[3163]: E0420 19:53:24.748121 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get 
\"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:53:26.695249 containerd[1659]: time="2026-04-20T19:53:26.684686330Z" level=error msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" to be killed: wait container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\": context canceled" Apr 20 19:53:27.277170 kubelet[3163]: E0420 19:53:27.276048 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:53:28.238871 kubelet[3163]: I0420 19:53:28.217525 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": net/http: TLS handshake timeout" Apr 20 19:53:31.233679 kubelet[3163]: E0420 19:53:30.489219 3163 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:32.274906 kubelet[3163]: E0420 19:53:30.556362 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" Apr 20 19:53:34.541476 kubelet[3163]: E0420 19:53:32.679281 3163 reflector.go:200] "Failed 
to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:53:36.186698 kubelet[3163]: E0420 19:53:36.149161 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap" Apr 20 19:53:36.544832 kubelet[3163]: E0420 19:53:34.689751 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:55:36.597390375 +0000 UTC m=+2834.691640082 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:39.618904 kubelet[3163]: E0420 19:53:39.610477 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:40.649853 kubelet[3163]: E0420 19:53:40.646326 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:53:41.237174 kubelet[3163]: E0420 19:53:41.229747 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 20 19:53:41.840826 kubelet[3163]: E0420 19:53:37.220428 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" containerName="kube-controller-manager" containerID="containerd://d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" gracePeriod=30 Apr 20 19:53:42.286759 containerd[1659]: time="2026-04-20T19:53:42.103705725Z" level=error msg="StopContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" to be killed: wait container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\": context canceled" Apr 20 19:53:43.093670 kubelet[3163]: E0420 19:53:39.665332 3163 projected.go:194] Error preparing data for projected 
volume kube-api-access-kld4g for pod calico-system/calico-apiserver-84684997fc-zpm5v: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:43.842381 sshd[6723]: Connection closed by 10.0.0.1 port 40778 Apr 20 19:53:44.563000 audit[6705]: AUDIT1106 pid=6705 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:53:44.615000 audit[6705]: AUDIT1104 pid=6705 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:53:43.886385 sshd-session[6705]: pam_unix(sshd:session): session closed for user core Apr 20 19:53:45.236684 kernel: audit: type=1106 audit(1776714824.563:1227): pid=6705 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:53:45.237469 kernel: audit: type=1104 audit(1776714824.615:1228): pid=6705 uid=0 auid=500 ses=60 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:53:45.425388 kubelet[3163]: E0420 19:53:45.371132 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-controller-manager" 
containerID={"Type":"containerd","ID":"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf"} pod="kube-system/kube-controller-manager-localhost" Apr 20 19:53:45.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-11-10.0.0.14:22-10.0.0.1:40778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:53:46.507203 kernel: audit: type=1131 audit(1776714825.692:1229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@58-11-10.0.0.14:22-10.0.0.1:40778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:53:45.578105 systemd[1]: sshd@58-11-10.0.0.14:22-10.0.0.1:40778.service: Deactivated successfully. Apr 20 19:53:45.779505 systemd[1]: sshd@58-11-10.0.0.14:22-10.0.0.1:40778.service: Consumed 4.270s CPU time, 4.2M memory peak. Apr 20 19:53:46.777139 systemd[1]: session-60.scope: Deactivated successfully. Apr 20 19:53:47.724523 kubelet[3163]: E0420 19:53:47.449469 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" Apr 20 19:53:47.724523 kubelet[3163]: E0420 19:53:47.670151 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:53:46.965395 systemd[1]: session-60.scope: Consumed 12.570s CPU time, 15.8M memory peak. Apr 20 19:53:48.922738 systemd-logind[1627]: Session 60 logged out. Waiting for processes to exit. 
Apr 20 19:53:51.382259 kubelet[3163]: E0420 19:53:51.381518 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:55:43.07244487 +0000 UTC m=+2841.166694579 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:53:52.165625 kubelet[3163]: E0420 19:53:52.069114 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" containerName="kube-scheduler" containerID="containerd://ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" gracePeriod=30 Apr 20 19:53:52.343395 kubelet[3163]: E0420 19:53:52.252336 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729"} pod="kube-system/kube-scheduler-localhost" Apr 20 19:53:55.394105 systemd[1]: Started sshd@59-4108-10.0.0.14:22-10.0.0.1:36236.service - OpenSSH per-connection server daemon (10.0.0.1:36236). Apr 20 19:53:55.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-4108-10.0.0.14:22-10.0.0.1:36236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:53:57.068389 kernel: audit: type=1130 audit(1776714835.639:1230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-4108-10.0.0.14:22-10.0.0.1:36236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:53:56.608323 systemd-logind[1627]: Removed session 60. Apr 20 19:53:58.466635 kubelet[3163]: E0420 19:53:58.465198 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:55:53.967512173 +0000 UTC m=+2852.061761878 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-kld4g" (UniqueName: "kubernetes.io/projected/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-kube-api-access-kld4g") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:53:59.264318 kubelet[3163]: E0420 19:53:52.850224 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-controller-manager\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 20 19:54:03.059357 kubelet[3163]: E0420 19:54:01.584491 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" Apr 20 19:54:03.992660 kubelet[3163]: E0420 19:54:03.972423 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get 
\"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 20 19:54:09.335516 kubelet[3163]: I0420 19:54:08.669506 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": net/http: TLS handshake timeout" Apr 20 19:54:10.083873 kubelet[3163]: E0420 19:54:09.771453 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:54:11.896074 kubelet[3163]: E0420 19:54:10.997622 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:12.570254 kubelet[3163]: E0420 19:54:12.569869 3163 projected.go:194] Error preparing data for projected volume kube-api-access-4m6bv for pod calico-system/csi-node-driver-5h6vg: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:13.086517 kubelet[3163]: E0420 19:54:13.069942 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv podName:9f02930c-961c-4c4b-8334-b61cbd5c3d20 nodeName:}" failed. No retries permitted until 2026-04-20 19:56:14.574776609 +0000 UTC m=+2872.669026309 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4m6bv" (UniqueName: "kubernetes.io/projected/9f02930c-961c-4c4b-8334-b61cbd5c3d20-kube-api-access-4m6bv") pod "csi-node-driver-5h6vg" (UID: "9f02930c-961c-4c4b-8334-b61cbd5c3d20") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:13.802318 containerd[1659]: time="2026-04-20T19:54:13.443350594Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 20 19:54:15.289228 containerd[1659]: time="2026-04-20T19:54:15.286228554Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}" Apr 20 19:54:15.336774 kubelet[3163]: E0420 19:54:14.370380 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:54:15.698495 kubelet[3163]: E0420 19:54:15.496464 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:54:15.870414 kubelet[3163]: E0420 19:54:15.859308 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Apr 20 19:54:18.229193 containerd[1659]: time="2026-04-20T19:54:18.228283593Z" level=error msg="get state for 535cbf317370e2ee0ec5e64de676b160729bcf3ec8cac6f2b79f5d2eb1374a04" error="context deadline exceeded" Apr 20 19:54:18.378324 containerd[1659]: time="2026-04-20T19:54:18.235274200Z" level=warning msg="unknown status" status=0 Apr 20 19:54:18.527216 kubelet[3163]: E0420 19:54:17.396118 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:18.743342 containerd[1659]: time="2026-04-20T19:54:18.245188094Z" level=error msg="ttrpc: received message on inactive stream" stream=61 Apr 20 19:54:19.432201 kubelet[3163]: E0420 19:54:19.430742 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:19.740116 kubelet[3163]: E0420 19:54:18.819372 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:19.740116 kubelet[3163]: E0420 19:54:18.826117 3163 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:19.971262 kubelet[3163]: E0420 19:54:19.854324 3163 projected.go:289] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:20.026669 kubelet[3163]: E0420 19:54:17.425765 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a8263f4322ee51\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a8263f4322ee51 kube-system 2448 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:08:26 +0000 UTC,LastTimestamp:2026-04-20 19:21:00.168326093 +0000 UTC m=+758.262577660,Count:14,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:54:20.856787 kubelet[3163]: E0420 19:54:20.670373 3163 projected.go:194] Error preparing data for projected volume kube-api-access-5kv6b for pod calico-system/calico-node-g9fs5: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:21.330312 kubelet[3163]: E0420 19:54:20.857387 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:23.261098 kubelet[3163]: E0420 19:54:20.675377 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 20 19:54:24.758000 audit[6742]: AUDIT1101 pid=6742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:54:25.393275 kernel: audit: type=1101 
audit(1776714864.758:1231): pid=6742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:54:25.586924 sshd[6742]: Accepted publickey for core from 10.0.0.1 port 36236 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:54:25.719000 audit[6742]: AUDIT1103 pid=6742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:54:25.823000 audit[6742]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd73e4a6b0 a2=3 a3=0 items=0 ppid=1 pid=6742 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:54:25.823000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:54:27.015469 kernel: audit: type=1103 audit(1776714865.719:1232): pid=6742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:54:27.077764 kubelet[3163]: E0420 19:54:26.342895 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:56:22.683380727 +0000 UTC m=+2880.777630425 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:28.285715 containerd[1659]: time="2026-04-20T19:54:26.684612002Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:54:28.285715 containerd[1659]: time="2026-04-20T19:54:28.247879046Z" level=error msg="ttrpc: received message on inactive stream" stream=203 Apr 20 19:54:26.777381 sshd-session[6742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:54:29.533356 kernel: audit: type=1006 audit(1776714865.823:1233): pid=6742 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=61 res=1 Apr 20 19:54:29.613466 containerd[1659]: time="2026-04-20T19:54:28.586512416Z" level=error msg="ttrpc: received message on inactive stream" stream=207 Apr 20 19:54:30.689049 kernel: audit: type=1300 audit(1776714865.823:1233): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd73e4a6b0 a2=3 a3=0 items=0 ppid=1 pid=6742 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=61 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:54:30.877279 kernel: audit: type=1327 audit(1776714865.823:1233): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:54:34.177695 systemd-logind[1627]: New 
session '61' of user 'core' with class 'user' and type 'tty'. Apr 20 19:54:35.690648 systemd[1]: Started session-61.scope - Session 61 of User core. Apr 20 19:54:38.120496 kubelet[3163]: E0420 19:54:32.770449 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:54:42.757000 audit[6742]: AUDIT1105 pid=6742 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:54:43.016515 kernel: audit: type=1105 audit(1776714882.757:1234): pid=6742 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:54:44.169179 kubelet[3163]: E0420 19:54:32.492973 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2900\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 20 19:54:44.588000 audit[6760]: AUDIT1103 pid=6760 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:54:45.358244 kernel: audit: type=1103 audit(1776714884.588:1235): pid=6760 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:54:47.686063 kubelet[3163]: E0420 19:54:41.251314 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 19:54:48.653861 kubelet[3163]: I0420 19:54:47.587347 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" Apr 20 19:54:50.748789 kubelet[3163]: E0420 19:54:45.416745 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:54:50.748789 kubelet[3163]: E0420 19:54:50.726405 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 19:54:50.748789 kubelet[3163]: E0420 19:54:41.241854 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:56:32.186366268 +0000 UTC m=+2890.280615970 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/071d23f6-a94b-4165-9229-2d0570b516d8-tigera-ca-bundle") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:50.748789 kubelet[3163]: E0420 19:54:50.744759 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 19:54:51.382151 kubelet[3163]: E0420 19:54:50.939938 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:54:51.382151 kubelet[3163]: E0420 19:54:46.222399 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 19:54:51.576380 kubelet[3163]: E0420 19:54:41.450402 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2837\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 20 19:54:51.576380 kubelet[3163]: E0420 19:54:41.459203 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get 
\"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 19:54:56.310490 kubelet[3163]: E0420 19:54:56.304220 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 19:56:55.025263541 +0000 UTC m=+2913.119513257 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:54:57.609075 kubelet[3163]: E0420 19:54:57.576927 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:54:59.255745 kubelet[3163]: E0420 19:54:59.248511 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:56:59.216352062 +0000 UTC m=+2917.310601771 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5kv6b" (UniqueName: "kubernetes.io/projected/071d23f6-a94b-4165-9229-2d0570b516d8-kube-api-access-5kv6b") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:55:03.686010 kubelet[3163]: E0420 19:55:03.684410 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:57:02.782450376 +0000 UTC m=+2920.876700085 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:55:10.639071 kubelet[3163]: E0420 19:55:10.639026 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 19:55:13.556110 kubelet[3163]: E0420 19:55:13.105490 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a8263f4322ee51\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a8263f4322ee51 kube-system 2448 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:DNSConfigForming,Message:Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:08:26 +0000 UTC,LastTimestamp:2026-04-20 19:21:00.168326093 +0000 UTC m=+758.262577660,Count:14,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:55:13.677748 containerd[1659]: time="2026-04-20T19:55:13.670729677Z" level=info msg="StopContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" with timeout 30 (s)" Apr 20 19:55:13.853244 kubelet[3163]: E0420 19:55:13.846872 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap" Apr 20 19:55:13.881109 containerd[1659]: time="2026-04-20T19:55:13.849171460Z" level=info msg="Skipping the sending of signal terminated to container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" because a prior stop with timeout>0 request already sent the signal" Apr 20 19:55:14.186286 kubelet[3163]: E0420 19:55:13.756640 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m57.376s" Apr 20 19:55:16.265747 kubelet[3163]: E0420 19:55:16.265389 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:55:19.006234 kubelet[3163]: E0420 19:55:18.723433 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.533s" Apr 20 19:55:23.261431 
sshd[6760]: Connection closed by 10.0.0.1 port 36236 Apr 20 19:55:23.382352 sshd-session[6742]: pam_unix(sshd:session): session closed for user core Apr 20 19:55:23.484442 kubelet[3163]: E0420 19:55:23.066393 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Apr 20 19:55:23.543000 audit[6742]: AUDIT1106 pid=6742 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:23.567000 audit[6742]: AUDIT1104 pid=6742 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:23.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-4108-10.0.0.14:22-10.0.0.1:36236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:23.729218 kernel: audit: type=1106 audit(1776714923.543:1236): pid=6742 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:23.729304 kubelet[3163]: I0420 19:55:23.712100 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" Apr 20 19:55:23.717826 systemd[1]: sshd@59-4108-10.0.0.14:22-10.0.0.1:36236.service: Deactivated successfully. Apr 20 19:55:23.729662 kernel: audit: type=1104 audit(1776714923.567:1237): pid=6742 uid=0 auid=500 ses=61 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:23.724938 systemd[1]: sshd@59-4108-10.0.0.14:22-10.0.0.1:36236.service: Consumed 7.737s CPU time, 4.1M memory peak. Apr 20 19:55:23.729774 kernel: audit: type=1131 audit(1776714923.723:1238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@59-4108-10.0.0.14:22-10.0.0.1:36236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:23.729243 systemd[1]: session-61.scope: Deactivated successfully. Apr 20 19:55:23.729468 systemd[1]: session-61.scope: Consumed 16.863s CPU time, 17.6M memory peak. Apr 20 19:55:23.747985 systemd-logind[1627]: Session 61 logged out. Waiting for processes to exit. Apr 20 19:55:23.764423 systemd-logind[1627]: Removed session 61. 
Apr 20 19:55:23.805319 kubelet[3163]: E0420 19:55:23.805083 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:55:24.147760 kubelet[3163]: E0420 19:55:24.147177 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.937s" Apr 20 19:55:27.556024 kubelet[3163]: E0420 19:55:27.520148 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:55:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:55:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:55:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T19:55:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\\\",\\\"ghcr.io/flatcar/calico/node:v3.31.4\\\"],\\\"sizeBytes\\\":159838426},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\\\",\\\"ghcr.io/flatcar/calico/cni:v3.31.4\\\"],\\\"sizeBytes\\\":72167716},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.31.4\\\"],\\\"sizeBytes\\\":49971841},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha2
56:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\\\",\\\"quay.io/tigera/operator:v1.40.7\\\"],\\\"sizeBytes\\\":40842151},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\\\",\\\"registry.k8s.io/kube-proxy:v1.33.11\\\"],\\\"sizeBytes\\\":32009730},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\\\",\\\"registry.k8s.io/kube-apiserver:v1.33.11\\\"],\\\"sizeBytes\\\":30190588},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\\\",\\\"registry.k8s.io/kube-controller-manager:v1.33.11\\\"],\\\"sizeBytes\\\":27737794},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\\\",\\\"registry.k8s.io/etcd:3.5.24-0\\\"],\\\"sizeBytes\\\":23716032},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\\\",\\\"registry.k8s.io/kube-scheduler:v1.33.11\\\"],\\\"sizeBytes\\\":21856121},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\\\",\\\"registry.k8s.io/coredns/coredns:v1.12.0\\\"],\\\"sizeBytes\\\":20939036},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\\\"],\\\"sizeBytes\\\":16260314},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\\\",\\\"ghcr.io/flatcar/calico/csi:v3.31.4\\\"],\\\"sizeBytes\\\":10348547},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\\\"],\\\"sizeBytes\\\":618
6255},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 19:55:28.674588 kubelet[3163]: E0420 19:55:28.554198 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:55:29.187164 kubelet[3163]: E0420 19:55:29.057447 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.908s" Apr 20 19:55:29.375680 kubelet[3163]: I0420 19:55:29.234098 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="pods \"calico-node-g9fs5\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Apr 20 19:55:29.489814 kubelet[3163]: E0420 19:55:29.465647 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 19:55:29.649000 
audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-8219-10.0.0.14:22-10.0.0.1:46034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:29.799174 kernel: audit: type=1130 audit(1776714929.649:1239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-8219-10.0.0.14:22-10.0.0.1:46034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:29.650317 systemd[1]: Started sshd@60-8219-10.0.0.14:22-10.0.0.1:46034.service - OpenSSH per-connection server daemon (10.0.0.1:46034). Apr 20 19:55:34.006478 kubelet[3163]: E0420 19:55:34.000272 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:55:34.020937 kubelet[3163]: I0420 19:55:33.695792 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="pods \"calico-apiserver-84684997fc-zpm5v\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" Apr 20 19:55:34.023252 kubelet[3163]: E0420 19:55:33.849389 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" 
reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 19:55:34.226588 kubelet[3163]: E0420 19:55:34.224902 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.999s" Apr 20 19:55:34.567000 audit[6799]: AUDIT1101 pid=6799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:34.672233 kernel: audit: type=1101 audit(1776714934.567:1240): pid=6799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:34.684000 audit[6799]: AUDIT1103 pid=6799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:34.802653 sshd[6799]: Accepted publickey for core from 10.0.0.1 port 46034 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:55:34.810000 audit[6799]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9aab85d0 a2=3 a3=0 items=0 ppid=1 pid=6799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:55:34.810000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:55:35.375152 kernel: audit: type=1103 audit(1776714934.684:1241): pid=6799 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:35.024419 sshd-session[6799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:55:35.592450 kernel: audit: type=1006 audit(1776714934.810:1242): pid=6799 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=62 res=1 Apr 20 19:55:35.626169 kernel: audit: type=1300 audit(1776714934.810:1242): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9aab85d0 a2=3 a3=0 items=0 ppid=1 pid=6799 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=62 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:55:35.657702 kernel: audit: type=1327 audit(1776714934.810:1242): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:55:35.678762 kubelet[3163]: E0420 19:55:35.593141 3163 reflector.go:200] "Failed to watch" err=< Apr 20 19:55:35.678762 kubelet[3163]: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 20 19:55:35.678762 kubelet[3163]: RBAC: [clusterrole.rbac.authorization.k8s.io "calico-tiered-policy-passthrough" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found] Apr 20 19:55:35.678762 kubelet[3163]: > logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:55:36.088358 kubelet[3163]: I0420 
19:55:36.087423 3163 status_manager.go:895] "Failed to get status for pod" podUID="33fee6ba1581201eda98a989140db110" pod="kube-system/kube-scheduler-localhost" err=< Apr 20 19:55:36.088358 kubelet[3163]: pods "kube-scheduler-localhost" is forbidden: User "system:node:localhost" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Apr 20 19:55:36.088358 kubelet[3163]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "calico-tiered-policy-passthrough" not found, clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found] Apr 20 19:55:36.088358 kubelet[3163]: > Apr 20 19:55:36.295241 systemd-logind[1627]: New session '62' of user 'core' with class 'user' and type 'tty'. Apr 20 19:55:36.319385 systemd[1]: Started session-62.scope - Session 62 of User core. 
Apr 20 19:55:37.020000 audit[6799]: AUDIT1105 pid=6799 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:37.037184 kernel: audit: type=1105 audit(1776714937.020:1243): pid=6799 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:37.146000 audit[6804]: AUDIT1103 pid=6804 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:37.263592 kernel: audit: type=1103 audit(1776714937.146:1244): pid=6804 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:38.146348 kubelet[3163]: E0420 19:55:38.145923 3163 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:55:38.146348 kubelet[3163]: E0420 19:55:38.146205 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:57:40.146035753 +0000 UTC m=+2958.240285452 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/526e8f89-8d32-4504-b20c-956610c7bb82-kube-proxy") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:55:41.024227 kubelet[3163]: E0420 19:55:41.020015 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.431s" Apr 20 19:55:41.717774 kubelet[3163]: I0420 19:55:41.137512 3163 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d60bfc28bae39dd4c39466e0fffee6553b16b69bc14ddc6752a782b3abc019c6" Apr 20 19:55:43.923940 containerd[1659]: time="2026-04-20T19:55:43.920654792Z" level=info msg="Kill container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\"" Apr 20 19:55:46.616136 kubelet[3163]: E0420 19:55:46.616098 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:46.637417 kubelet[3163]: E0420 19:55:46.635774 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:55:48.467637 kubelet[3163]: E0420 19:55:48.428836 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:55:49.111683 kubelet[3163]: E0420 19:55:49.110783 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 19:57:51.104143308 +0000 UTC m=+2969.198393005 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:55:49.121155 containerd[1659]: time="2026-04-20T19:55:49.120955009Z" level=info msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" with timeout 30 (s)" Apr 20 19:55:49.312956 containerd[1659]: time="2026-04-20T19:55:49.312768033Z" level=info msg="Skipping the sending of signal terminated to container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" because a prior stop with timeout>0 request already sent the signal" Apr 20 19:55:51.698992 containerd[1659]: time="2026-04-20T19:55:51.693928587Z" level=info msg="StopContainer for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" with timeout 2 (s)" Apr 20 19:55:51.836094 sshd[6804]: Connection closed by 10.0.0.1 port 46034 Apr 20 19:55:51.835000 audit[6799]: AUDIT1106 pid=6799 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:51.831350 sshd-session[6799]: pam_unix(sshd:session): session closed for user core Apr 20 19:55:51.872000 audit[6799]: AUDIT1104 pid=6799 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:51.898583 kernel: audit: type=1106 audit(1776714951.835:1245): pid=6799 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:51.910081 kernel: audit: type=1104 audit(1776714951.872:1246): pid=6799 uid=0 auid=500 ses=62 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:55:52.145054 systemd[1]: sshd@60-8219-10.0.0.14:22-10.0.0.1:46034.service: Deactivated successfully. Apr 20 19:55:52.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-8219-10.0.0.14:22-10.0.0.1:46034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:52.564179 kernel: audit: type=1131 audit(1776714952.200:1247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@60-8219-10.0.0.14:22-10.0.0.1:46034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:52.202420 systemd[1]: sshd@60-8219-10.0.0.14:22-10.0.0.1:46034.service: Consumed 1.904s CPU time, 4.1M memory peak. Apr 20 19:55:52.786382 kubelet[3163]: E0420 19:55:52.202291 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.809s" Apr 20 19:55:52.417400 systemd[1]: session-62.scope: Deactivated successfully. Apr 20 19:55:52.428828 systemd[1]: session-62.scope: Consumed 6.528s CPU time, 17.9M memory peak. Apr 20 19:55:52.673357 systemd-logind[1627]: Session 62 logged out. Waiting for processes to exit. Apr 20 19:55:53.394951 systemd-logind[1627]: Removed session 62. 
Apr 20 19:55:53.810753 containerd[1659]: time="2026-04-20T19:55:53.806725284Z" level=error msg="get state for 38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" error="context deadline exceeded" Apr 20 19:55:53.810753 containerd[1659]: time="2026-04-20T19:55:53.807862352Z" level=warning msg="unknown status" status=0 Apr 20 19:55:53.810753 containerd[1659]: time="2026-04-20T19:55:53.808245082Z" level=info msg="Stop container \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" with signal terminated" Apr 20 19:55:58.045690 containerd[1659]: time="2026-04-20T19:55:57.892285139Z" level=info msg="container event discarded" container=094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a type=CONTAINER_CREATED_EVENT Apr 20 19:55:59.593398 systemd[1]: Started sshd@61-8220-10.0.0.14:22-10.0.0.1:35918.service - OpenSSH per-connection server daemon (10.0.0.1:35918). Apr 20 19:55:59.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-8220-10.0.0.14:22-10.0.0.1:35918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:55:59.777439 kernel: audit: type=1130 audit(1776714959.619:1248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-8220-10.0.0.14:22-10.0.0.1:35918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:55:59.807027 containerd[1659]: time="2026-04-20T19:55:59.800219643Z" level=info msg="container event discarded" container=7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9 type=CONTAINER_STOPPED_EVENT Apr 20 19:56:00.184876 containerd[1659]: time="2026-04-20T19:56:00.181384717Z" level=info msg="container event discarded" container=336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65 type=CONTAINER_STOPPED_EVENT Apr 20 19:56:00.214060 containerd[1659]: time="2026-04-20T19:56:00.211869034Z" level=info msg="container event discarded" container=292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a type=CONTAINER_CREATED_EVENT Apr 20 19:56:03.940598 containerd[1659]: time="2026-04-20T19:56:01.634079444Z" level=info msg="container event discarded" container=38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a type=CONTAINER_CREATED_EVENT Apr 20 19:56:08.291375 containerd[1659]: time="2026-04-20T19:56:08.266577966Z" level=info msg="container event discarded" container=094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a type=CONTAINER_STARTED_EVENT Apr 20 19:56:08.291375 containerd[1659]: time="2026-04-20T19:56:08.289329353Z" level=info msg="container event discarded" container=292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a type=CONTAINER_STARTED_EVENT Apr 20 19:56:08.291375 containerd[1659]: time="2026-04-20T19:56:08.289627404Z" level=info msg="container event discarded" container=38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a type=CONTAINER_STARTED_EVENT Apr 20 19:56:11.049304 containerd[1659]: time="2026-04-20T19:56:11.048616658Z" level=info msg="container event discarded" container=094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a type=CONTAINER_STOPPED_EVENT Apr 20 19:56:11.510973 containerd[1659]: time="2026-04-20T19:56:11.506509517Z" level=error msg="ttrpc: received message on inactive stream" stream=153 Apr 20 19:56:12.343498 
containerd[1659]: time="2026-04-20T19:56:12.333301470Z" level=info msg="container event discarded" container=13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d type=CONTAINER_CREATED_EVENT Apr 20 19:56:13.557105 containerd[1659]: time="2026-04-20T19:56:13.556351062Z" level=info msg="Kill container \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\"" Apr 20 19:56:13.740264 containerd[1659]: time="2026-04-20T19:56:13.685387850Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 20 19:56:13.743844 kubelet[3163]: E0420 19:56:13.743326 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 20 19:56:14.386097 kubelet[3163]: E0420 19:56:14.144262 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:56:15.223000 audit[6833]: AUDIT1101 pid=6833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:15.250488 kernel: audit: type=1101 audit(1776714975.223:1249): pid=6833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:15.254943 sshd[6833]: Accepted publickey for core from 10.0.0.1 port 35918 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:56:15.422195 systemd[1]: cri-containerd-292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a.scope: Deactivated successfully. Apr 20 19:56:15.427000 audit: BPF prog-id=214 op=UNLOAD Apr 20 19:56:15.427000 audit: BPF prog-id=218 op=UNLOAD Apr 20 19:56:15.441000 audit[6833]: AUDIT1103 pid=6833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:15.558000 audit[6833]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4563b410 a2=3 a3=0 items=0 ppid=1 pid=6833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:15.558000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:56:15.596466 kernel: audit: type=1334 audit(1776714975.427:1250): prog-id=214 op=UNLOAD Apr 20 19:56:15.425612 systemd[1]: cri-containerd-292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a.scope: Consumed 4min 27.125s CPU time, 254.1M memory peak, 24.3M read from disk. 
Apr 20 19:56:15.596943 kernel: audit: type=1334 audit(1776714975.427:1251): prog-id=218 op=UNLOAD Apr 20 19:56:15.596969 kernel: audit: type=1103 audit(1776714975.441:1252): pid=6833 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:15.596983 kernel: audit: type=1006 audit(1776714975.558:1253): pid=6833 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=63 res=1 Apr 20 19:56:15.596996 kernel: audit: type=1300 audit(1776714975.558:1253): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4563b410 a2=3 a3=0 items=0 ppid=1 pid=6833 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=63 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:15.597013 kernel: audit: type=1327 audit(1776714975.558:1253): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:56:15.597522 sshd-session[6833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:56:15.602490 systemd[1]: cri-containerd-38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a.scope: Deactivated successfully. Apr 20 19:56:15.605623 systemd[1]: cri-containerd-38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a.scope: Consumed 19.727s CPU time, 48M memory peak, 324K read from disk, 4K written to disk. 
Apr 20 19:56:15.612000 audit: BPF prog-id=221 op=UNLOAD Apr 20 19:56:15.623421 kernel: audit: type=1334 audit(1776714975.612:1254): prog-id=221 op=UNLOAD Apr 20 19:56:15.640012 containerd[1659]: time="2026-04-20T19:56:15.639021732Z" level=info msg="received container exit event container_id:\"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\" id:\"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\" pid:6475 exit_status:255 exited_at:{seconds:1776714975 nanos:637766443}" Apr 20 19:56:15.640509 kubelet[3163]: W0420 19:56:15.639047 3163 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071d23f6_a94b_4165_9229_2d0570b516d8.slice/cri-containerd-38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a.scope/memory.min": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod071d23f6_a94b_4165_9229_2d0570b516d8.slice/cri-containerd-38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a.scope/memory.min: no such device Apr 20 19:56:15.640509 kubelet[3163]: E0420 19:56:15.596490 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": unexpected EOF" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 
19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:56:16.610398 systemd-logind[1627]: New session '63' of user 'core' with class 'user' and type 'tty'. Apr 20 19:56:16.672342 systemd[1]: Started session-63.scope - Session 63 of User core. Apr 20 19:56:16.747468 kubelet[3163]: E0420 19:56:16.619460 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": unexpected EOF" Apr 20 19:56:16.747468 kubelet[3163]: E0420 19:56:16.744036 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 19:56:17.075000 audit[6833]: AUDIT1105 pid=6833 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:17.097816 kernel: audit: type=1105 audit(1776714977.075:1255): pid=6833 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:17.512016 kubelet[3163]: E0420 19:56:17.511038 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too 
long" expected="1s" actual="24.94s" Apr 20 19:56:17.509000 audit[6866]: AUDIT1103 pid=6866 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:17.660434 kernel: audit: type=1103 audit(1776714977.509:1256): pid=6866 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:17.760162 kubelet[3163]: I0420 19:56:17.759585 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused - error from a previous attempt: unexpected EOF" Apr 20 19:56:17.858970 kubelet[3163]: E0420 19:56:17.761798 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused - error from a previous attempt: unexpected EOF" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 19:56:17.975466 kubelet[3163]: E0420 19:56:17.972821 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused - error from a previous attempt: unexpected EOF" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:56:17.981689 
containerd[1659]: time="2026-04-20T19:56:17.974263998Z" level=info msg="received container exit event container_id:\"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" id:\"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" pid:6483 exit_status:137 exited_at:{seconds:1776714977 nanos:972999368}" Apr 20 19:56:17.981689 containerd[1659]: time="2026-04-20T19:56:17.981327031Z" level=info msg="received container exit event container_id:\"13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d\" id:\"13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d\" pid:6594 exit_status:1 exited_at:{seconds:1776714977 nanos:973621590}" Apr 20 19:56:17.981117 systemd[1]: cri-containerd-13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d.scope: Deactivated successfully. Apr 20 19:56:17.986000 audit: BPF prog-id=222 op=UNLOAD Apr 20 19:56:17.986000 audit: BPF prog-id=226 op=UNLOAD Apr 20 19:56:17.988579 systemd[1]: cri-containerd-13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d.scope: Consumed 15.593s CPU time, 19.2M memory peak. 
Apr 20 19:56:18.026496 kubelet[3163]: I0420 19:56:18.024168 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:18.038644 kubelet[3163]: E0420 19:56:18.038477 3163 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ef51a6b32499d3d1e531fb8b3a83d4f.slice/cri-containerd-292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a.scope\": RecentStats: unable to find data in memory cache]" Apr 20 19:56:18.241073 kubelet[3163]: E0420 19:56:18.239403 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": dial tcp 10.0.0.14:6443: connect: connection refused - error from a previous attempt: unexpected EOF" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:56:18.429344 kubelet[3163]: E0420 19:56:18.419513 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:18.456127 kubelet[3163]: I0420 19:56:18.350379 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:18.573396 kubelet[3163]: E0420 19:56:18.571269 3163 controller.go:195] "Failed to update lease" err="Put 
\"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:18.603302 kubelet[3163]: I0420 19:56:18.603127 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:18.610764 kubelet[3163]: E0420 19:56:18.610510 3163 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:18.619805 containerd[1659]: time="2026-04-20T19:56:18.619717628Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"86127632e87b6e0054e6de41e1fb55f25c008430522a30335c2c43c02eafcea9\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1; runc init error(s): nsexec-1[6863]: failed to open /proc/6483/ns/ipc: No such file or directory; nsexec-0[6851]: failed to sync with stage-1: next state (got 0 of 4 bytes)" Apr 20 19:56:18.640491 kubelet[3163]: E0420 19:56:18.640209 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"86127632e87b6e0054e6de41e1fb55f25c008430522a30335c2c43c02eafcea9\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1; runc init error(s): nsexec-1[6863]: failed to open /proc/6483/ns/ipc: No such file or directory; nsexec-0[6851]: failed to sync with 
stage-1: next state (got 0 of 4 bytes)" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-shutdown"] Apr 20 19:56:18.652738 kubelet[3163]: I0420 19:56:18.648793 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:18.652738 kubelet[3163]: I0420 19:56:18.651469 3163 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 20 19:56:18.686682 kubelet[3163]: E0420 19:56:18.640487 3163 kuberuntime_container.go:741] "PreStop hook failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"86127632e87b6e0054e6de41e1fb55f25c008430522a30335c2c43c02eafcea9\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1; runc init error(s): nsexec-1[6863]: failed to open /proc/6483/ns/ipc: No such file or directory; nsexec-0[6851]: failed to sync with stage-1: next state (got 0 of 4 bytes)" pod="calico-system/calico-node-g9fs5" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" containerName="calico-node" containerID="containerd://38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" Apr 20 19:56:18.797971 kubelet[3163]: I0420 19:56:18.787372 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:18.800815 kubelet[3163]: E0420 19:56:18.800638 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms" Apr 20 19:56:18.836729 kubelet[3163]: E0420 19:56:18.835470 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 19:56:18.844781 kubelet[3163]: E0420 19:56:18.843479 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.317s" Apr 20 19:56:19.037765 kubelet[3163]: E0420 19:56:19.037294 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms" Apr 20 19:56:19.380262 containerd[1659]: time="2026-04-20T19:56:19.378094933Z" level=info msg="Kill container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\"" Apr 20 19:56:19.590131 kubelet[3163]: E0420 19:56:19.587632 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms" Apr 20 19:56:20.093996 containerd[1659]: time="2026-04-20T19:56:20.071030860Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = FailedPrecondition desc = failed to exec in container: failed to start exec \"265b83900e9a95b0b9d56b5cf44eaea1cffb8dbd719a4834d5e9ef9524f40c5f\": container 
38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a init process is not running: failed precondition" Apr 20 19:56:20.224189 kubelet[3163]: E0420 19:56:20.221467 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = FailedPrecondition desc = failed to exec in container: failed to start exec \"265b83900e9a95b0b9d56b5cf44eaea1cffb8dbd719a4834d5e9ef9524f40c5f\": container 38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a init process is not running: failed precondition" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:56:20.631838 kubelet[3163]: E0420 19:56:20.624451 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s" Apr 20 19:56:21.683441 sshd[6866]: Connection closed by 10.0.0.1 port 35918 Apr 20 19:56:21.752490 sshd-session[6833]: pam_unix(sshd:session): session closed for user core Apr 20 19:56:21.791000 audit[6833]: AUDIT1106 pid=6833 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:21.794343 kernel: kauditd_printk_skb: 2 callbacks suppressed Apr 20 19:56:21.829738 kernel: audit: type=1106 audit(1776714981.791:1259): pid=6833 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 
20 19:56:21.830000 audit[6833]: AUDIT1104 pid=6833 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:21.847705 kernel: audit: type=1104 audit(1776714981.830:1260): pid=6833 uid=0 auid=500 ses=63 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:22.391198 systemd[1]: sshd@61-8220-10.0.0.14:22-10.0.0.1:35918.service: Deactivated successfully. Apr 20 19:56:22.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-8220-10.0.0.14:22-10.0.0.1:35918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:56:22.614995 systemd[1]: sshd@61-8220-10.0.0.14:22-10.0.0.1:35918.service: Consumed 3.822s CPU time, 4.2M memory peak. Apr 20 19:56:22.623069 kernel: audit: type=1131 audit(1776714982.593:1261): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@61-8220-10.0.0.14:22-10.0.0.1:35918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:56:22.751124 kubelet[3163]: E0420 19:56:22.642964 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="3.2s" Apr 20 19:56:23.281459 containerd[1659]: time="2026-04-20T19:56:23.280688229Z" level=error msg="get state for 38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" error="context deadline exceeded" Apr 20 19:56:23.418517 containerd[1659]: time="2026-04-20T19:56:23.288668010Z" level=warning msg="unknown status" status=0 Apr 20 19:56:23.595987 systemd[1]: session-63.scope: Deactivated successfully. Apr 20 19:56:23.664396 systemd[1]: session-63.scope: Consumed 3.073s CPU time, 18.1M memory peak. Apr 20 19:56:24.022277 systemd-logind[1627]: Session 63 logged out. Waiting for processes to exit. Apr 20 19:56:24.525325 systemd-logind[1627]: Removed session 63. Apr 20 19:56:24.725431 kubelet[3163]: E0420 19:56:24.721938 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:56:25.015772 kubelet[3163]: E0420 19:56:24.834385 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:56:25.157254 kubelet[3163]: E0420 19:56:25.154279 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 19:58:27.096394376 +0000 UTC m=+3005.190644092 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:56:25.355116 kubelet[3163]: E0420 19:56:25.342013 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:56:25.646005 containerd[1659]: time="2026-04-20T19:56:25.641621652Z" level=error msg="failed to delete task" error="context deadline exceeded" id=292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a Apr 20 19:56:25.646005 containerd[1659]: time="2026-04-20T19:56:25.645583079Z" level=error msg="failed to handle container TaskExit event container_id:\"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\" id:\"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\" pid:6475 
exit_status:255 exited_at:{seconds:1776714975 nanos:637766443}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 19:56:25.648167 kubelet[3163]: I0420 19:56:25.647594 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:25.650345 kubelet[3163]: I0420 19:56:25.648820 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:26.060283 containerd[1659]: time="2026-04-20T19:56:26.039507280Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Apr 20 19:56:26.225107 kubelet[3163]: E0420 19:56:26.213439 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.142s" Apr 20 19:56:26.395333 containerd[1659]: time="2026-04-20T19:56:26.283296796Z" level=info msg="container event discarded" container=13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d type=CONTAINER_STARTED_EVENT Apr 20 19:56:26.424045 kubelet[3163]: E0420 19:56:26.423930 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="6.4s" Apr 20 19:56:26.424525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a-rootfs.mount: Deactivated successfully. 
Apr 20 19:56:26.887198 containerd[1659]: time="2026-04-20T19:56:26.886010711Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"68a28751aad681f22186e73958a2523b95244a37f1c4fa19c9c6654da841b92b\": ttrpc: closed" Apr 20 19:56:27.186734 containerd[1659]: time="2026-04-20T19:56:27.028817015Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/d6fd6f578359a16fb6047ac6b8915843558ecdd02f7ae288b74c76a061bb8a9a->@: write: broken pipe" runtime=io.containerd.runc.v2 Apr 20 19:56:27.298598 kubelet[3163]: E0420 19:56:27.296947 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"68a28751aad681f22186e73958a2523b95244a37f1c4fa19c9c6654da841b92b\": ttrpc: closed" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:56:27.300814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a-rootfs.mount: Deactivated successfully. 
Apr 20 19:56:27.476391 containerd[1659]: time="2026-04-20T19:56:27.466835041Z" level=info msg="TaskExit event container_id:\"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\" id:\"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\" pid:6475 exit_status:255 exited_at:{seconds:1776714975 nanos:637766443}" Apr 20 19:56:28.146184 containerd[1659]: time="2026-04-20T19:56:28.123265362Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a not found" Apr 20 19:56:28.469832 systemd[1]: Started sshd@62-4109-10.0.0.14:22-10.0.0.1:57676.service - OpenSSH per-connection server daemon (10.0.0.1:57676). Apr 20 19:56:28.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-4109-10.0.0.14:22-10.0.0.1:57676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:56:28.822388 kubelet[3163]: E0420 19:56:28.663325 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a not found" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:56:29.016952 kernel: audit: type=1130 audit(1776714988.594:1262): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-4109-10.0.0.14:22-10.0.0.1:57676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:56:29.017463 kubelet[3163]: E0420 19:56:29.013254 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.571s" Apr 20 19:56:29.216033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d-rootfs.mount: Deactivated successfully. Apr 20 19:56:29.496495 containerd[1659]: time="2026-04-20T19:56:29.241735172Z" level=error msg="failed to delete task" error="rpc error: code = NotFound desc = container not created: not found" id=292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a Apr 20 19:56:29.892455 containerd[1659]: time="2026-04-20T19:56:29.827174314Z" level=info msg="StopContainer for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" returns successfully" Apr 20 19:56:30.146161 containerd[1659]: time="2026-04-20T19:56:30.143936308Z" level=info msg="Ensure that container 292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a in task-service has been cleanup successfully" Apr 20 19:56:30.319678 kubelet[3163]: E0420 19:56:30.318412 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:56:30.551081 kubelet[3163]: E0420 19:56:30.547499 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.53s" Apr 20 19:56:30.794293 containerd[1659]: time="2026-04-20T19:56:30.789727159Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for container name:\"calico-node\" attempt:2" Apr 20 19:56:32.037337 kubelet[3163]: E0420 
19:56:32.036957 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.154s" Apr 20 19:56:32.064002 containerd[1659]: time="2026-04-20T19:56:32.063718563Z" level=info msg="Container 6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:56:32.148707 kubelet[3163]: I0420 19:56:32.148213 3163 scope.go:117] "RemoveContainer" containerID="7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9" Apr 20 19:56:32.191408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524448275.mount: Deactivated successfully. Apr 20 19:56:32.463000 audit[6928]: AUDIT1101 pid=6928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:32.525891 sshd[6928]: Accepted publickey for core from 10.0.0.1 port 57676 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:56:32.536000 audit[6928]: AUDIT1103 pid=6928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:32.536000 audit[6928]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed8976020 a2=3 a3=0 items=0 ppid=1 pid=6928 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=64 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:32.536000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:56:32.725948 kernel: audit: type=1101 audit(1776714992.463:1263): pid=6928 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:32.537891 sshd-session[6928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:56:32.893504 containerd[1659]: time="2026-04-20T19:56:32.693230557Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for name:\"calico-node\" attempt:2 returns container id \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\"" Apr 20 19:56:32.908191 kernel: audit: type=1103 audit(1776714992.536:1264): pid=6928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:32.915048 kernel: audit: type=1006 audit(1776714992.536:1265): pid=6928 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=64 res=1 Apr 20 19:56:32.923653 kernel: audit: type=1300 audit(1776714992.536:1265): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed8976020 a2=3 a3=0 items=0 ppid=1 pid=6928 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=64 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:32.924324 kernel: audit: type=1327 audit(1776714992.536:1265): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:56:32.980461 containerd[1659]: time="2026-04-20T19:56:32.978276969Z" level=info msg="StartContainer for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\"" Apr 20 19:56:33.058501 containerd[1659]: time="2026-04-20T19:56:32.983012220Z" level=info msg="RemoveContainer for 
\"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\"" Apr 20 19:56:33.063348 kubelet[3163]: E0420 19:56:33.059500 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 19:56:33.090005 containerd[1659]: time="2026-04-20T19:56:33.089857055Z" level=info msg="connecting to shim 6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" address="unix:///run/containerd/s/d6fd6f578359a16fb6047ac6b8915843558ecdd02f7ae288b74c76a061bb8a9a" protocol=ttrpc version=3 Apr 20 19:56:33.090853 systemd-logind[1627]: New session '64' of user 'core' with class 'user' and type 'tty'. Apr 20 19:56:33.114774 systemd[1]: Started session-64.scope - Session 64 of User core. Apr 20 19:56:33.221747 kubelet[3163]: I0420 19:56:33.221522 3163 scope.go:117] "RemoveContainer" containerID="13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d" Apr 20 19:56:33.248338 containerd[1659]: time="2026-04-20T19:56:33.248186123Z" level=info msg="RemoveContainer for \"7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9\" returns successfully" Apr 20 19:56:33.345996 kubelet[3163]: I0420 19:56:33.337087 3163 scope.go:117] "RemoveContainer" containerID="094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a" Apr 20 19:56:33.392000 audit[6928]: AUDIT1105 pid=6928 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:33.554891 kernel: audit: type=1105 audit(1776714993.392:1266): pid=6928 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:33.765065 kubelet[3163]: I0420 19:56:33.764359 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:33.776000 audit[6933]: AUDIT1103 pid=6933 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:33.849296 kernel: audit: type=1103 audit(1776714993.776:1267): pid=6933 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:33.889690 kubelet[3163]: I0420 19:56:33.888572 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:34.002937 containerd[1659]: time="2026-04-20T19:56:34.002687368Z" level=info msg="RemoveContainer for \"094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a\"" Apr 20 19:56:34.097687 kubelet[3163]: I0420 19:56:34.093510 3163 scope.go:117] "RemoveContainer" containerID="292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a" Apr 20 19:56:34.097687 kubelet[3163]: I0420 19:56:34.095520 3163 status_manager.go:895] 
"Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:34.126457 kubelet[3163]: E0420 19:56:34.116471 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 19:56:34.126501 containerd[1659]: time="2026-04-20T19:56:34.126149147Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for container name:\"calico-apiserver\" attempt:3" Apr 20 19:56:34.278253 kubelet[3163]: I0420 19:56:34.266139 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:34.505794 kubelet[3163]: I0420 19:56:34.505004 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:34.534624 containerd[1659]: time="2026-04-20T19:56:34.534260210Z" level=info msg="CreateContainer within sandbox \"a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2\" for container name:\"kube-apiserver\" attempt:2" Apr 20 19:56:35.490749 kubelet[3163]: I0420 19:56:35.456129 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get 
\"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:35.526763 kubelet[3163]: E0420 19:56:35.522012 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:56:35.551185 kubelet[3163]: I0420 19:56:35.544841 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:35.713296 containerd[1659]: time="2026-04-20T19:56:35.710353100Z" level=info msg="RemoveContainer for \"094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a\" returns successfully" Apr 20 19:56:35.754505 kubelet[3163]: I0420 19:56:35.753375 3163 status_manager.go:895] "Failed to get status for pod" 
podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:35.754505 kubelet[3163]: I0420 19:56:35.753587 3163 scope.go:117] "RemoveContainer" containerID="336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65" Apr 20 19:56:36.512440 containerd[1659]: time="2026-04-20T19:56:36.393246013Z" level=info msg="Container 31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:56:36.810272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191794583.mount: Deactivated successfully. Apr 20 19:56:37.660702 containerd[1659]: time="2026-04-20T19:56:37.659309446Z" level=info msg="RemoveContainer for \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\"" Apr 20 19:56:39.146987 containerd[1659]: time="2026-04-20T19:56:39.146756558Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for name:\"calico-apiserver\" attempt:3 returns container id \"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\"" Apr 20 19:56:39.547957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2064427281.mount: Deactivated successfully. 
Apr 20 19:56:40.039447 containerd[1659]: time="2026-04-20T19:56:40.019466522Z" level=info msg="Container 54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c: CDI devices from CRI Config.CDIDevices: []" Apr 20 19:56:40.182688 containerd[1659]: time="2026-04-20T19:56:40.179313559Z" level=info msg="StartContainer for \"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\"" Apr 20 19:56:41.613840 containerd[1659]: time="2026-04-20T19:56:41.613722535Z" level=info msg="RemoveContainer for \"336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65\" returns successfully" Apr 20 19:56:41.615008 containerd[1659]: time="2026-04-20T19:56:41.614981183Z" level=info msg="connecting to shim 31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2" address="unix:///run/containerd/s/9f25d20f4617cde34f7397032d9ecbc0b43cd780bc15ce3e8713428f4b2ceb63" protocol=ttrpc version=3 Apr 20 19:56:41.619206 kubelet[3163]: E0420 19:56:41.615776 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 19:56:41.730205 systemd[1]: Started cri-containerd-6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616.scope - libcontainer container 6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616. 
Apr 20 19:56:43.824487 containerd[1659]: time="2026-04-20T19:56:43.824102302Z" level=info msg="CreateContainer within sandbox \"a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2\" for name:\"kube-apiserver\" attempt:2 returns container id \"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\"" Apr 20 19:56:46.231198 sshd[6933]: Connection closed by 10.0.0.1 port 57676 Apr 20 19:56:46.285152 sshd-session[6928]: pam_unix(sshd:session): session closed for user core Apr 20 19:56:46.480957 kubelet[3163]: E0420 19:56:46.479602 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:56:46.480000 audit[6928]: AUDIT1106 pid=6928 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Apr 20 19:56:46.480000 audit[6928]: AUDIT1104 pid=6928 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:46.529293 kernel: audit: type=1106 audit(1776715006.480:1268): pid=6928 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:46.529346 kernel: audit: type=1104 audit(1776715006.480:1269): pid=6928 uid=0 auid=500 ses=64 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:56:46.530230 systemd[1]: sshd@62-4109-10.0.0.14:22-10.0.0.1:57676.service: Deactivated successfully. Apr 20 19:56:46.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-4109-10.0.0.14:22-10.0.0.1:57676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:56:46.877273 kernel: audit: type=1131 audit(1776715006.568:1270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@62-4109-10.0.0.14:22-10.0.0.1:57676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:56:46.589096 systemd[1]: sshd@62-4109-10.0.0.14:22-10.0.0.1:57676.service: Consumed 2.058s CPU time, 4.2M memory peak. Apr 20 19:56:46.875124 systemd[1]: session-64.scope: Deactivated successfully. Apr 20 19:56:46.900121 systemd[1]: session-64.scope: Consumed 8.247s CPU time, 17.8M memory peak. 
Apr 20 19:56:46.914377 systemd-logind[1627]: Session 64 logged out. Waiting for processes to exit. Apr 20 19:56:46.915522 systemd-logind[1627]: Removed session 64. Apr 20 19:56:47.488320 containerd[1659]: time="2026-04-20T19:56:47.486330817Z" level=info msg="StartContainer for \"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\"" Apr 20 19:56:49.710508 containerd[1659]: time="2026-04-20T19:56:49.661946995Z" level=info msg="connecting to shim 54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c" address="unix:///run/containerd/s/80102222aa3ed4b7ee78377cd8f0cd98fe2254d5e4d09c655e1726e3fa17fed4" protocol=ttrpc version=3 Apr 20 19:56:51.271247 containerd[1659]: time="2026-04-20T19:56:51.172358881Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Apr 20 19:56:52.418322 kubelet[3163]: E0420 19:56:52.412199 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 19:56:53.355527 kubelet[3163]: E0420 19:56:53.346327 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:56:53.851424 kubelet[3163]: I0420 19:56:53.849917 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 
19:56:53.990921 kubelet[3163]: I0420 19:56:53.990815 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:54.007123 kubelet[3163]: E0420 19:56:53.991134 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.133s" Apr 20 19:56:54.013831 systemd[1]: Started sshd@63-4110-10.0.0.14:22-10.0.0.1:52166.service - OpenSSH per-connection server daemon (10.0.0.1:52166). Apr 20 19:56:54.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-4110-10.0.0.14:22-10.0.0.1:52166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:56:54.043279 kernel: audit: type=1130 audit(1776715014.014:1271): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-4110-10.0.0.14:22-10.0.0.1:52166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:56:54.072163 kubelet[3163]: I0420 19:56:54.056193 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:54.158252 containerd[1659]: time="2026-04-20T19:56:54.071982467Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Apr 20 19:56:54.861310 kubelet[3163]: E0420 19:56:54.840466 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:56:55.337000 audit: BPF prog-id=227 op=LOAD Apr 20 19:56:55.337000 audit[6932]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001a8490 a2=98 a3=0 items=0 ppid=4032 pid=6932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:55.337000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663316436313066613133636639373564373238313865663362613238 Apr 20 19:56:55.338000 audit: BPF prog-id=228 op=LOAD Apr 20 19:56:55.338000 audit[6932]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8220 a2=98 a3=0 items=0 ppid=4032 pid=6932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:55.338000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663316436313066613133636639373564373238313865663362613238 Apr 20 19:56:55.338000 audit: BPF prog-id=228 op=UNLOAD Apr 20 19:56:55.338000 audit[6932]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=6932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:55.338000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663316436313066613133636639373564373238313865663362613238 Apr 20 19:56:55.353000 audit: BPF prog-id=227 op=UNLOAD Apr 20 19:56:55.353000 audit[6932]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=13 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=6932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:55.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663316436313066613133636639373564373238313865663362613238 Apr 20 19:56:55.354000 audit: BPF prog-id=229 op=LOAD Apr 20 19:56:55.354000 audit[6932]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001a86f0 a2=98 a3=0 items=0 ppid=4032 pid=6932 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:55.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663316436313066613133636639373564373238313865663362613238 Apr 20 19:56:55.853433 kernel: audit: type=1334 audit(1776715015.337:1272): prog-id=227 op=LOAD Apr 20 19:56:55.880141 kernel: audit: type=1300 audit(1776715015.337:1272): arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001a8490 a2=98 a3=0 items=0 ppid=4032 pid=6932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:55.917409 kernel: audit: type=1327 audit(1776715015.337:1272): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663316436313066613133636639373564373238313865663362613238 Apr 20 19:56:55.921975 kernel: audit: type=1334 audit(1776715015.338:1273): prog-id=228 op=LOAD Apr 20 19:56:55.924525 kernel: audit: type=1300 audit(1776715015.338:1273): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8220 a2=98 a3=0 items=0 ppid=4032 pid=6932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:55.930404 kernel: audit: type=1327 audit(1776715015.338:1273): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663316436313066613133636639373564373238313865663362613238 Apr 20 19:56:55.936965 kernel: audit: type=1334 audit(1776715015.338:1274): prog-id=228 op=UNLOAD Apr 20 19:56:55.937709 kernel: audit: type=1300 audit(1776715015.338:1274): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=6932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:56:55.937738 kernel: audit: type=1327 audit(1776715015.338:1274): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663316436313066613133636639373564373238313865663362613238 Apr 20 19:56:56.679045 containerd[1659]: time="2026-04-20T19:56:56.546281543Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Apr 20 19:56:56.739480 containerd[1659]: time="2026-04-20T19:56:56.595444073Z" level=error msg="get state for 6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" error="context deadline exceeded" Apr 20 19:56:56.822066 containerd[1659]: time="2026-04-20T19:56:56.710508800Z" level=warning msg="unknown status" status=0 Apr 20 19:56:56.850062 kubelet[3163]: E0420 19:56:56.849972 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" 
cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:56:57.019416 kubelet[3163]: E0420 19:56:57.009430 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/serviceaccounts/tigera-operator/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"tigera-operator\"/\"tigera-operator\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"tigera-operator-6bf85f8dd-hvgdj\", UID:\"22f1ff03-de8a-48db-b03e-54fdbe0d3d5f\"}" Apr 20 19:56:57.030268 kubelet[3163]: E0420 19:56:57.027416 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.004s" Apr 20 19:56:57.367329 kubelet[3163]: E0420 19:56:57.243345 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 19:56:57.664579 kubelet[3163]: I0420 19:56:57.662116 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:57.764272 kubelet[3163]: E0420 19:56:57.422002 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:56:58.053778 kubelet[3163]: E0420 19:56:58.025209 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:56:58.137008 kubelet[3163]: E0420 19:56:58.125336 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:56:58.171318 kubelet[3163]: I0420 19:56:58.171173 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:58.289854 kubelet[3163]: E0420 19:56:58.171198 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 19:59:00.170615875 +0000 UTC m=+3038.264865578 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:56:58.291628 kubelet[3163]: I0420 19:56:58.290791 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 19:56:58.297253 systemd[1]: Started cri-containerd-31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2.scope - libcontainer container 31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2. Apr 20 19:56:59.440099 kubelet[3163]: E0420 19:56:59.440054 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.412s" Apr 20 19:57:00.014811 containerd[1659]: time="2026-04-20T19:57:00.011996568Z" level=error msg="get state for 6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" error="context deadline exceeded" Apr 20 19:57:00.235474 containerd[1659]: time="2026-04-20T19:57:00.027720574Z" level=warning msg="unknown status" status=0 Apr 20 19:57:00.076380 systemd[1]: Started cri-containerd-54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c.scope - libcontainer container 54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c. 
Apr 20 19:57:00.812143 containerd[1659]: time="2026-04-20T19:57:00.736171381Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 19:57:00.852911 containerd[1659]: time="2026-04-20T19:57:00.829440359Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 19:57:00.890000 audit[6988]: AUDIT1101 pid=6988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:57:00.953476 sshd[6988]: Accepted publickey for core from 10.0.0.1 port 52166 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:57:00.962000 audit[6988]: AUDIT1103 pid=6988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:57:01.081000 audit[6988]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc626d73b0 a2=3 a3=0 items=0 ppid=1 pid=6988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=65 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:57:01.081000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:57:01.194100 kernel: kauditd_printk_skb: 6 callbacks suppressed Apr 20 19:57:01.145823 sshd-session[6988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:57:01.194513 kernel: audit: type=1101 audit(1776715020.890:1277): pid=6988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Apr 20 19:57:01.194611 kernel: audit: type=1103 audit(1776715020.962:1278): pid=6988 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:57:01.194717 kernel: audit: type=1006 audit(1776715021.081:1279): pid=6988 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=65 res=1 Apr 20 19:57:01.194748 kernel: audit: type=1300 audit(1776715021.081:1279): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc626d73b0 a2=3 a3=0 items=0 ppid=1 pid=6988 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=65 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:57:01.194804 kernel: audit: type=1327 audit(1776715021.081:1279): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:57:01.319654 kubelet[3163]: E0420 19:57:01.318166 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 19:57:02.191242 systemd-logind[1627]: New session '65' of user 'core' with class 'user' and type 'tty'. Apr 20 19:57:02.347271 kubelet[3163]: E0420 19:57:02.328464 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.694s" Apr 20 19:57:02.479054 systemd[1]: Started session-65.scope - Session 65 of User core. 
Apr 20 19:57:02.988428 kubelet[3163]: E0420 19:57:02.988266 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 19:57:04.391000 audit[6988]: AUDIT1105 pid=6988 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:57:04.495805 kernel: audit: type=1105 audit(1776715024.391:1280): pid=6988 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:57:04.851019 kubelet[3163]: E0420 19:57:04.294720 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:57:04.955000 audit[7019]: AUDIT1103 pid=7019 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:57:05.159039 kernel: audit: type=1103 audit(1776715024.955:1281): pid=7019 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:57:05.848778 kubelet[3163]: E0420 
19:57:05.847645 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 19:59:07.454273349 +0000 UTC m=+3045.548523058 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition Apr 20 19:57:05.931926 kubelet[3163]: E0420 19:57:05.925998 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:57:06.021000 audit: BPF prog-id=230 op=LOAD Apr 20 19:57:06.025000 audit: BPF prog-id=231 op=LOAD Apr 20 19:57:06.046000 audit: BPF prog-id=232 op=LOAD Apr 20 19:57:06.046000 audit[6977]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000218240 a2=98 a3=0 items=0 ppid=2849 pid=6977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:57:06.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534633662343239323262663562393033316433663864373435346633 Apr 20 19:57:06.047000 audit: BPF prog-id=232 op=UNLOAD Apr 20 19:57:06.047000 audit[6977]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=6977 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:57:06.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534633662343239323262663562393033316433663864373435346633 Apr 20 19:57:06.082000 audit: BPF prog-id=233 op=LOAD Apr 20 19:57:06.082000 audit[6977]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000218490 a2=98 a3=0 items=0 ppid=2849 pid=6977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:57:06.082000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534633662343239323262663562393033316433663864373435346633 Apr 20 19:57:06.097000 audit: BPF prog-id=234 op=LOAD Apr 20 19:57:06.097000 audit[6977]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000218220 a2=98 a3=0 items=0 ppid=2849 pid=6977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:57:06.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534633662343239323262663562393033316433663864373435346633 Apr 20 19:57:06.150000 audit: BPF prog-id=234 op=UNLOAD Apr 20 19:57:06.150000 audit[6977]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2849 
pid=6977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534633662343239323262663562393033316433663864373435346633
Apr 20 19:57:06.187000 audit: BPF prog-id=233 op=UNLOAD
Apr 20 19:57:06.187000 audit[6977]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=6977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534633662343239323262663562393033316433663864373435346633
Apr 20 19:57:06.187000 audit: BPF prog-id=235 op=LOAD
Apr 20 19:57:06.187000 audit[6977]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002186f0 a2=98 a3=0 items=0 ppid=2849 pid=6977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534633662343239323262663562393033316433663864373435346633
Apr 20 19:57:06.298186 kernel: audit: type=1334 audit(1776715026.021:1282): prog-id=230 op=LOAD
Apr 20 19:57:06.336934 kubelet[3163]: I0420 19:57:06.024094 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:57:06.341815 kernel: audit: type=1334 audit(1776715026.025:1283): prog-id=231 op=LOAD
Apr 20 19:57:06.361225 kernel: audit: type=1334 audit(1776715026.046:1284): prog-id=232 op=LOAD
Apr 20 19:57:06.393366 kernel: audit: type=1300 audit(1776715026.046:1284): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000218240 a2=98 a3=0 items=0 ppid=2849 pid=6977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.404294 kernel: audit: type=1327 audit(1776715026.046:1284): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534633662343239323262663562393033316433663864373435346633
Apr 20 19:57:06.404689 kernel: audit: type=1334 audit(1776715026.047:1285): prog-id=232 op=UNLOAD
Apr 20 19:57:06.404750 kernel: audit: type=1300 audit(1776715026.047:1285): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2849 pid=6977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.217000 audit: BPF prog-id=236 op=LOAD
Apr 20 19:57:06.217000 audit[6956]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000dc240 a2=98 a3=0 items=0 ppid=5396 pid=6956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.217000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331666130623562343265353266613635346534653430613735646231
Apr 20 19:57:06.405777 kernel: audit: type=1327 audit(1776715026.047:1285): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534633662343239323262663562393033316433663864373435346633
Apr 20 19:57:06.405803 kubelet[3163]: I0420 19:57:06.048416 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:57:06.406106 kernel: audit: type=1334 audit(1776715026.082:1286): prog-id=233 op=LOAD
Apr 20 19:57:06.406138 kernel: audit: type=1300 audit(1776715026.082:1286): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000218490 a2=98 a3=0 items=0 ppid=2849 pid=6977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.406369 kubelet[3163]: I0420 19:57:06.406300 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:57:06.417000 audit: BPF prog-id=236 op=UNLOAD
Apr 20 19:57:06.417000 audit[6956]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331666130623562343265353266613635346534653430613735646231
Apr 20 19:57:06.417000 audit: BPF prog-id=237 op=LOAD
Apr 20 19:57:06.417000 audit[6956]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000dc490 a2=98 a3=0 items=0 ppid=5396 pid=6956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331666130623562343265353266613635346534653430613735646231
Apr 20 19:57:06.417000 audit: BPF prog-id=238 op=LOAD
Apr 20 19:57:06.417000 audit[6956]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0000dc220 a2=98 a3=0 items=0 ppid=5396 pid=6956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331666130623562343265353266613635346534653430613735646231
Apr 20 19:57:06.418000 audit: BPF prog-id=238 op=UNLOAD
Apr 20 19:57:06.418000 audit[6956]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331666130623562343265353266613635346534653430613735646231
Apr 20 19:57:06.418000 audit: BPF prog-id=237 op=UNLOAD
Apr 20 19:57:06.418000 audit[6956]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=6956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331666130623562343265353266613635346534653430613735646231
Apr 20 19:57:06.418000 audit: BPF prog-id=239 op=LOAD
Apr 20 19:57:06.418000 audit[6956]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0000dc6f0 a2=98 a3=0 items=0 ppid=5396 pid=6956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:06.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331666130623562343265353266613635346534653430613735646231
Apr 20 19:57:06.449526 kubelet[3163]: E0420 19:57:06.418597 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.974s"
Apr 20 19:57:08.087027 containerd[1659]: time="2026-04-20T19:57:07.949672884Z" level=error msg="get state for 31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2" error="context deadline exceeded"
Apr 20 19:57:08.087027 containerd[1659]: time="2026-04-20T19:57:08.084617658Z" level=warning msg="unknown status" status=0
Apr 20 19:57:08.306093 kubelet[3163]: E0420 19:57:08.260426 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:57:08.689092 kubelet[3163]: E0420 19:57:08.666305 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 20 19:57:09.320841 kubelet[3163]: E0420 19:57:09.320730 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.245s"
Apr 20 19:57:10.446208 containerd[1659]: time="2026-04-20T19:57:10.426860790Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 20 19:57:14.546480 sshd[7019]: Connection closed by 10.0.0.1 port 52166
Apr 20 19:57:14.680998 sshd-session[6988]: pam_unix(sshd:session): session closed for user core
Apr 20 19:57:15.151000 audit[6988]: AUDIT1106 pid=6988 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:15.222000 audit[6988]: AUDIT1104 pid=6988 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:15.556076 kernel: kauditd_printk_skb: 34 callbacks suppressed
Apr 20 19:57:15.638312 kernel: audit: type=1106 audit(1776715035.151:1298): pid=6988 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:15.698462 kernel: audit: type=1104 audit(1776715035.222:1299): pid=6988 uid=0 auid=500 ses=65 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:16.127743 containerd[1659]: time="2026-04-20T19:57:16.127700900Z" level=info msg="StartContainer for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" returns successfully"
Apr 20 19:57:16.128712 systemd[1]: sshd@63-4110-10.0.0.14:22-10.0.0.1:52166.service: Deactivated successfully.
Apr 20 19:57:16.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-4110-10.0.0.14:22-10.0.0.1:52166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:57:16.176814 kernel: audit: type=1131 audit(1776715036.129:1300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@63-4110-10.0.0.14:22-10.0.0.1:52166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:57:16.167122 systemd[1]: sshd@63-4110-10.0.0.14:22-10.0.0.1:52166.service: Consumed 2.531s CPU time, 4.1M memory peak.
Apr 20 19:57:17.205414 systemd[1]: session-65.scope: Deactivated successfully.
Apr 20 19:57:17.470988 systemd[1]: session-65.scope: Consumed 7.011s CPU time, 16.1M memory peak.
Apr 20 19:57:17.923067 systemd-logind[1627]: Session 65 logged out. Waiting for processes to exit.
Apr 20 19:57:17.944287 systemd-logind[1627]: Removed session 65.
Apr 20 19:57:19.615970 kubelet[3163]: E0420 19:57:19.615857 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 20 19:57:19.616722 kubelet[3163]: E0420 19:57:19.616236 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 19:57:19.616722 kubelet[3163]: E0420 19:57:19.616443 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 20 19:57:19.619853 kubelet[3163]: I0420 19:57:19.619279 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:57:19.619853 kubelet[3163]: I0420 19:57:19.619479 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:57:19.619853 kubelet[3163]: I0420 19:57:19.619714 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:57:19.622148 kubelet[3163]: E0420 19:57:19.619484 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:57:21.150043 containerd[1659]: time="2026-04-20T19:57:21.148842101Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 19:57:21.814089 systemd[1]: Started sshd@64-8221-10.0.0.14:22-10.0.0.1:59496.service - OpenSSH per-connection server daemon (10.0.0.1:59496).
Apr 20 19:57:21.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-8221-10.0.0.14:22-10.0.0.1:59496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:57:22.434390 kernel: audit: type=1130 audit(1776715041.898:1301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-8221-10.0.0.14:22-10.0.0.1:59496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 19:57:22.836178 kubelet[3163]: E0420 19:57:22.805320 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:57:23.687476 containerd[1659]: time="2026-04-20T19:57:23.672390545Z" level=info msg="StartContainer for \"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" returns successfully"
Apr 20 19:57:24.993322 kubelet[3163]: I0420 19:57:24.993282 3163 scope.go:117] "RemoveContainer" containerID="13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d"
Apr 20 19:57:25.446975 kubelet[3163]: E0420 19:57:24.993108 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="15.672s"
Apr 20 19:57:27.142460 containerd[1659]: time="2026-04-20T19:57:27.120416036Z" level=info msg="StartContainer for \"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" returns successfully"
Apr 20 19:57:27.474598 containerd[1659]: time="2026-04-20T19:57:27.454810416Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 19:57:29.208522 kubelet[3163]: E0420 19:57:29.198444 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:57:29.635510 kubelet[3163]: I0420 19:57:29.627339 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:57:29.969956 kubelet[3163]: E0420 19:57:29.947446 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 20 19:57:33.095696 kubelet[3163]: I0420 19:57:32.682267 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:57:35.532122 kubelet[3163]: E0420 19:57:34.642005 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 19:57:35.578270 kubelet[3163]: E0420 19:57:35.575917 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 19:57:35.632000 audit[7072]: AUDIT1101 pid=7072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:35.787749 sshd[7072]: Accepted publickey for core from 10.0.0.1 port 59496 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 19:57:36.062381 kernel: audit: type=1101 audit(1776715055.632:1302): pid=7072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:36.212000 audit[7072]: AUDIT1103 pid=7072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:36.278000 audit[7072]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd650e140 a2=3 a3=0 items=0 ppid=1 pid=7072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=66 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:36.278000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:57:36.736890 kernel: audit: type=1103 audit(1776715056.212:1303): pid=7072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:36.753130 kernel: audit: type=1006 audit(1776715056.278:1304): pid=7072 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=66 res=1
Apr 20 19:57:36.896485 kernel: audit: type=1300 audit(1776715056.278:1304): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd650e140 a2=3 a3=0 items=0 ppid=1 pid=7072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=66 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 19:57:36.986981 kernel: audit: type=1327 audit(1776715056.278:1304): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 19:57:36.957508 sshd-session[7072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 19:57:37.874649 kubelet[3163]: I0420 19:57:37.836804 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused"
Apr 20 19:57:39.976908 containerd[1659]: time="2026-04-20T19:57:39.924728785Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 19:57:40.691378 systemd-logind[1627]: New session '66' of user 'core' with class 'user' and type 'tty'.
Apr 20 19:57:41.485252 systemd[1]: Started session-66.scope - Session 66 of User core.
Apr 20 19:57:43.620164 containerd[1659]: time="2026-04-20T19:57:43.619981679Z" level=error msg="StopContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" to be killed: wait container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\": context canceled"
Apr 20 19:57:43.732000 audit[7072]: AUDIT1105 pid=7072 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:44.342830 kernel: audit: type=1105 audit(1776715063.732:1305): pid=7072 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:44.517000 audit[7086]: AUDIT1103 pid=7086 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:44.568889 kernel: audit: type=1103 audit(1776715064.517:1306): pid=7086 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 19:57:46.246418 kubelet[3163]: E0420 19:57:46.071481 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729"
Apr 20 19:57:48.515104 containerd[1659]: time="2026-04-20T19:57:48.379927214Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}"
Apr 20 19:57:49.334486 containerd[1659]: time="2026-04-20T19:57:48.744779644Z" level=info msg="container event discarded" container=d60bfc28bae39dd4c39466e0fffee6553b16b69bc14ddc6752a782b3abc019c6 type=CONTAINER_DELETED_EVENT
Apr 20 19:57:49.360435 kubelet[3163]: E0420 19:57:48.809120 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret"
Apr 20 19:57:49.653659 kubelet[3163]: E0420 19:57:48.652249 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 19:57:50.873507 kubelet[3163]: E0420 19:57:50.864137 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:57:51.677149 containerd[1659]: time="2026-04-20T19:57:51.676494088Z" level=info msg="RemoveContainer for \"13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d\""
Apr 20 19:57:53.425171 kubelet[3163]: E0420 19:57:52.615522 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s"
Apr 20 19:57:53.806507 kubelet[3163]: E0420 19:57:49.265858 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" containerName="kube-scheduler" containerID="containerd://ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" gracePeriod=30
Apr 20 19:57:59.295314 containerd[1659]: time="2026-04-20T19:57:59.292215599Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:58:00.078948 containerd[1659]: time="2026-04-20T19:57:59.643407600Z" level=error msg="ttrpc: received message on inactive stream" stream=303
Apr 20 19:58:00.078948 containerd[1659]: time="2026-04-20T19:57:59.931461013Z" level=error msg="ttrpc: received message on inactive stream" stream=307
Apr 20 19:58:04.198136 kubelet[3163]: E0420 19:57:59.968245 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729"} pod="kube-system/kube-scheduler-localhost"
Apr 20 19:58:05.555146 containerd[1659]: time="2026-04-20T19:58:05.554888451Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}"
Apr 20 19:58:06.592156 containerd[1659]: time="2026-04-20T19:58:05.980903175Z" level=info msg="RemoveContainer for \"13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d\" returns successfully"
Apr 20 19:58:08.187411 kubelet[3163]: E0420 19:58:08.039422 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110"
Apr 20 19:58:15.830131 containerd[1659]: time="2026-04-20T19:58:15.718406029Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 20 19:58:18.361185 containerd[1659]: time="2026-04-20T19:58:18.299072798Z" level=error msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" failed" error="rpc error: code = DeadlineExceeded desc = an error occurs during waiting for container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" to be killed: wait container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\": context deadline exceeded"
Apr 20 19:58:18.928327 containerd[1659]: time="2026-04-20T19:58:18.897355584Z" level=error msg="ttrpc: received message on inactive stream" stream=293
Apr 20 19:58:18.928327 containerd[1659]: time="2026-04-20T19:58:18.899433680Z" level=error msg="ttrpc: received message on inactive stream" stream=297
Apr 20 19:58:21.238829 containerd[1659]: time="2026-04-20T19:58:21.229262646Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 19:58:22.517049 kubelet[3163]: E0420 19:58:22.514347 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Apr 20 19:58:24.294478 kubelet[3163]: E0420 19:58:22.954906 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf"
Apr 20 19:58:25.149112 kubelet[3163]: E0420 19:58:23.453282 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 20 19:58:26.820834 kubelet[3163]: E0420 19:58:22.985104 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret"
Apr 20 19:58:27.972519 kubelet[3163]: E0420 19:58:27.191444 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" containerName="kube-controller-manager" containerID="containerd://d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" gracePeriod=30
Apr 20 19:58:32.456137 kubelet[3163]: E0420 19:58:32.452180 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 20 19:58:34.314391 kubelet[3163]: E0420 19:58:29.296151 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 20 19:58:35.978221 kubelet[3163]: E0420 19:58:34.020129 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 20:00:33.151102035 +0000 UTC m=+3131.245351970 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition
Apr 20 19:58:37.075045 kubelet[3163]: E0420 19:58:30.879139 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-controller-manager" containerID={"Type":"containerd","ID":"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf"} pod="kube-system/kube-controller-manager-localhost"
Apr 20 19:58:37.448857 kubelet[3163]: E0420 19:58:32.490672 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 19:58:37.872457 kubelet[3163]: E0420 19:58:13.793221 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000
UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:58:38.436942 kubelet[3163]: I0420 19:58:38.420823 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" Apr 20 19:58:42.649018 kubelet[3163]: E0420 19:58:40.242362 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-controller-manager\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 20 19:58:48.468745 containerd[1659]: time="2026-04-20T19:58:48.463330954Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Apr 20 19:58:50.755386 kubelet[3163]: I0420 19:58:50.749176 3163 request.go:752] "Waited before sending request" delay="1.186264532s" reason="retries: 2, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=3198&timeout=36m27s&timeoutSeconds=2187&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=3198&timeout=36m27s&timeoutSeconds=2187&watch=true" Apr 20 19:58:56.737278 sshd[7086]: Connection closed by 10.0.0.1 port 59496 Apr 20 19:58:57.270000 
audit[7072]: AUDIT1106 pid=7072 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:58:57.460000 audit[7072]: AUDIT1104 pid=7072 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:58:57.752713 kernel: audit: type=1106 audit(1776715137.270:1307): pid=7072 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:58:56.798155 sshd-session[7072]: pam_unix(sshd:session): session closed for user core Apr 20 19:58:58.092519 kernel: audit: type=1104 audit(1776715137.460:1308): pid=7072 uid=0 auid=500 ses=66 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:58:58.440596 systemd[1]: sshd@64-8221-10.0.0.14:22-10.0.0.1:59496.service: Deactivated successfully. Apr 20 19:58:58.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-8221-10.0.0.14:22-10.0.0.1:59496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 19:58:59.389835 kernel: audit: type=1131 audit(1776715138.681:1309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@64-8221-10.0.0.14:22-10.0.0.1:59496 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:58:58.722489 systemd[1]: sshd@64-8221-10.0.0.14:22-10.0.0.1:59496.service: Consumed 5.162s CPU time, 4.1M memory peak. Apr 20 19:58:59.278258 systemd[1]: session-66.scope: Deactivated successfully. Apr 20 19:58:59.439502 systemd[1]: session-66.scope: Consumed 34.166s CPU time, 17.9M memory peak. Apr 20 19:59:00.135917 kubelet[3163]: E0420 19:59:00.087384 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 19:59:00.298310 systemd-logind[1627]: Session 66 logged out. Waiting for processes to exit. Apr 20 19:59:01.520315 systemd-logind[1627]: Removed session 66. 
Apr 20 19:59:01.849360 kubelet[3163]: E0420 19:59:00.130770 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 19:59:04.708672 kubelet[3163]: E0420 19:59:04.660359 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:59:04.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-12300-10.0.0.14:22-10.0.0.1:60688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:59:05.667759 kernel: audit: type=1130 audit(1776715144.964:1310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-12300-10.0.0.14:22-10.0.0.1:60688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 19:59:04.778488 systemd[1]: Started sshd@65-12300-10.0.0.14:22-10.0.0.1:60688.service - OpenSSH per-connection server daemon (10.0.0.1:60688). 
Apr 20 19:59:06.564066 kubelet[3163]: E0420 19:59:06.549487 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 19:59:07.378806 kubelet[3163]: E0420 19:59:03.543389 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 19:59:08.778008 kubelet[3163]: E0420 19:59:07.838296 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 19:59:11.525602 kubelet[3163]: E0420 19:59:09.795402 3163 
token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token\": net/http: TLS handshake timeout" cacheKey="\"kube-proxy\"/\"kube-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"kube-proxy-c6mkn\", UID:\"526e8f89-8d32-4504-b20c-956610c7bb82\"}" Apr 20 19:59:14.480966 kubelet[3163]: I0420 19:59:10.848441 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": net/http: TLS handshake timeout" Apr 20 19:59:18.147417 kubelet[3163]: E0420 19:59:15.262334 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m47.629s" Apr 20 19:59:27.312775 containerd[1659]: time="2026-04-20T19:59:27.310510824Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}" Apr 20 19:59:29.865114 kubelet[3163]: E0420 19:59:21.092306 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:59:30.486000 audit[7111]: AUDIT1101 pid=7111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:59:30.655887 kernel: audit: type=1101 audit(1776715170.486:1311): pid=7111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:59:30.933383 sshd[7111]: Accepted publickey for core from 10.0.0.1 port 60688 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 19:59:32.077000 audit[7111]: AUDIT1103 pid=7111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:59:32.078000 audit[7111]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd65428890 a2=3 a3=0 items=0 ppid=1 pid=7111 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=67 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:59:32.078000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:59:32.081094 sshd-session[7111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 19:59:32.811344 kernel: audit: type=1103 audit(1776715172.077:1312): pid=7111 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:59:32.818440 kernel: audit: type=1006 audit(1776715172.078:1313): pid=7111 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=67 res=1 Apr 20 19:59:32.826493 kernel: audit: type=1300 audit(1776715172.078:1313): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd65428890 a2=3 a3=0 items=0 ppid=1 pid=7111 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=67 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 19:59:32.839113 kernel: audit: 
type=1327 audit(1776715172.078:1313): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 19:59:33.627504 kubelet[3163]: E0420 19:59:33.619456 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition Apr 20 19:59:36.644430 systemd-logind[1627]: New session '67' of user 'core' with class 'user' and type 'tty'. Apr 20 19:59:37.765480 systemd[1]: Started session-67.scope - Session 67 of User core. Apr 20 19:59:38.594945 containerd[1659]: time="2026-04-20T19:59:38.593955400Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 19:59:39.170192 containerd[1659]: time="2026-04-20T19:59:38.643113810Z" level=error msg="ttrpc: received message on inactive stream" stream=219 Apr 20 19:59:39.170192 containerd[1659]: time="2026-04-20T19:59:38.747843168Z" level=error msg="ttrpc: received message on inactive stream" stream=223 Apr 20 19:59:41.251000 audit[7111]: AUDIT1105 pid=7111 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:59:41.534302 kernel: audit: type=1105 audit(1776715181.251:1314): pid=7111 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:59:42.553000 audit[7121]: AUDIT1103 pid=7121 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:59:42.705470 kernel: audit: type=1103 audit(1776715182.553:1315): pid=7121 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 19:59:43.088257 kubelet[3163]: E0420 19:59:43.074478 3163 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 20 19:59:46.377234 kubelet[3163]: E0420 19:59:46.374644 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 20:01:48.319352433 +0000 UTC m=+3206.413687712 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 19:59:47.850134 containerd[1659]: time="2026-04-20T19:59:47.825909559Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Apr 20 19:59:48.584218 kubelet[3163]: E0420 19:59:46.280463 3163 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 20 19:59:50.087936 kubelet[3163]: I0420 19:59:50.085521 3163 request.go:752] "Waited before sending request" delay="3.547664142s" reason="client-side throttling, not priority and fairness" verb="PATCH" URL="https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e" Apr 20 19:59:53.032092 kubelet[3163]: E0420 19:59:51.856694 3163 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 20 19:59:53.599353 kubelet[3163]: E0420 19:59:49.972591 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 19:59:54.089168 kubelet[3163]: E0420 19:59:54.040528 3163 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 20 19:59:54.089168 kubelet[3163]: E0420 19:59:54.059913 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 19:59:56.619228 kubelet[3163]: E0420 19:59:56.615835 3163 kubelet.go:2460] "Skipping pod 
synchronization" err="container runtime is down" Apr 20 20:00:00.086229 kubelet[3163]: E0420 20:00:00.081023 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 20:02:00.60741718 +0000 UTC m=+3218.701666886 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition Apr 20 20:00:02.941969 kubelet[3163]: E0420 20:00:02.880104 3163 kubelet.go:2460] "Skipping pod synchronization" err="container runtime is down" Apr 20 20:00:05.767777 kubelet[3163]: E0420 20:00:05.764896 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 20:00:07.473717 kubelet[3163]: E0420 20:00:05.997421 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 20:00:08.695508 kubelet[3163]: E0420 20:00:05.127098 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/serviceaccounts/tigera-operator/token\": net/http: TLS handshake timeout" cacheKey="\"tigera-operator\"/\"tigera-operator\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"tigera-operator-6bf85f8dd-hvgdj\", UID:\"22f1ff03-de8a-48db-b03e-54fdbe0d3d5f\"}" Apr 20 20:00:09.101673 kubelet[3163]: 
E0420 20:00:07.460197 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 20:00:10.464641 kubelet[3163]: E0420 20:00:10.447525 3163 cri_stats_provider.go:468] "Failed to get the info of the filesystem with mountpoint" err="cannot find filesystem info for device \"/dev/vda9\"" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 20 20:00:11.245211 kubelet[3163]: E0420 20:00:11.242188 3163 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="get filesystem info: Failed to get the info of the filesystem with mountpoint: cannot find filesystem info for device \"/dev/vda9\"" Apr 20 20:00:12.083285 kubelet[3163]: E0420 20:00:10.635272 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:00:12.586160 kubelet[3163]: E0420 20:00:12.582650 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:00:13.559998 kubelet[3163]: E0420 20:00:13.523962 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 20:00:14.517139 kubelet[3163]: E0420 20:00:14.510118 3163 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 20:02:14.592582266 +0000 UTC m=+3232.686831982 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:00:15.391564 kubelet[3163]: I0420 20:00:13.542284 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": net/http: TLS handshake timeout" Apr 20 20:00:16.677817 containerd[1659]: time="2026-04-20T20:00:16.580508221Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Apr 20 20:00:18.685251 kubelet[3163]: E0420 20:00:18.674386 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 20:00:19.737922 kubelet[3163]: E0420 20:00:15.795325 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:00:22.455642 containerd[1659]: time="2026-04-20T20:00:22.455295807Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Apr 20 20:00:23.464405 kubelet[3163]: E0420 20:00:23.454738 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 20:00:24.465919 containerd[1659]: time="2026-04-20T20:00:24.381970703Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Apr 20 20:00:24.742011 kubelet[3163]: I0420 20:00:24.366440 3163 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-20T20:00:24Z","lastTransitionTime":"2026-04-20T20:00:24Z","reason":"KubeletNotReady","message":"PLEG is 
not healthy: pleg was last seen active 3m14.020191996s ago; threshold is 3m0s"}
Apr 20 20:00:25.665381 kubelet[3163]: E0420 20:00:25.664374 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 20:00:26.921217 kubelet[3163]: E0420 20:00:26.919264 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 20 20:00:28.142232 kubelet[3163]: E0420 20:00:27.376473 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 20:00:28.499231 kubelet[3163]: E0420 20:00:26.377085 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret"
Apr 20 20:00:29.692407 kubelet[3163]: E0420 20:00:29.681002 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 20:00:33.565024 sshd[7121]: Connection closed by 10.0.0.1 port 60688
Apr 20 20:00:33.952743 sshd-session[7111]: pam_unix(sshd:session): session closed for user core
Apr 20 20:00:34.433000 audit[7111]: AUDIT1106 pid=7111 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:00:34.510000 audit[7111]: AUDIT1104 pid=7111 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:00:34.772011 kernel: audit: type=1106 audit(1776715234.433:1316): pid=7111 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:00:34.822196 kernel: audit: type=1104 audit(1776715234.510:1317): pid=7111 uid=0 auid=500 ses=67 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:00:35.388443 systemd[1]: sshd@65-12300-10.0.0.14:22-10.0.0.1:60688.service: Deactivated successfully.
Apr 20 20:00:35.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-12300-10.0.0.14:22-10.0.0.1:60688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:00:35.934327 kernel: audit: type=1131 audit(1776715235.591:1318): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@65-12300-10.0.0.14:22-10.0.0.1:60688 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:00:35.618776 systemd[1]: sshd@65-12300-10.0.0.14:22-10.0.0.1:60688.service: Consumed 8.271s CPU time, 4.3M memory peak.
Apr 20 20:00:36.820939 systemd[1]: session-67.scope: Deactivated successfully.
Apr 20 20:00:37.037070 systemd[1]: session-67.scope: Consumed 24.120s CPU time, 17.8M memory peak.
Apr 20 20:00:38.125729 systemd-logind[1627]: Session 67 logged out. Waiting for processes to exit.
Apr 20 20:00:42.131038 systemd[1]: Started sshd@66-4111-10.0.0.14:22-10.0.0.1:54082.service - OpenSSH per-connection server daemon (10.0.0.1:54082).
Apr 20 20:00:42.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-4111-10.0.0.14:22-10.0.0.1:54082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:00:42.659390 kernel: audit: type=1130 audit(1776715242.195:1319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-4111-10.0.0.14:22-10.0.0.1:54082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:00:42.792503 systemd-logind[1627]: Removed session 67.
Apr 20 20:00:43.480520 containerd[1659]: time="2026-04-20T20:00:43.417435802Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 20:00:44.280457 kubelet[3163]: I0420 20:00:43.928396 3163 request.go:752] "Waited before sending request" delay="1.506824672s" reason="retries: 5, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.14:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=3222&timeoutSeconds=381&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.14:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=3222&timeoutSeconds=381&watch=true"
Apr 20 20:00:44.667047 kubelet[3163]: E0420 20:00:44.345699 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition
Apr 20 20:00:44.667047 kubelet[3163]: E0420 20:00:44.381576 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 20:02:46.369514629 +0000 UTC m=+3264.463764380 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition
Apr 20 20:00:44.667047 kubelet[3163]: I0420 20:00:42.320025 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout"
Apr 20 20:00:45.131252 kubelet[3163]: E0420 20:00:45.119787 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-apiserver-localhost.18a826e4a66a3a4e\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826e4a66a3a4e kube-system 2637 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:16 +0000 UTC,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 20:00:45.131252 kubelet[3163]: E0420 20:00:45.119958 3163 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-apiserver-localhost.18a826f1190cd7b8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:5ef51a6b32499d3d1e531fb8b3a83d4f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://10.0.0.14:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,LastTimestamp:2026-04-20 19:21:09.946906552 +0000 UTC m=+768.041156259,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 20:00:47.458864 kubelet[3163]: E0420 20:00:47.448048 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 20:00:50.448912 kubelet[3163]: E0420 20:00:50.433278 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s"
Apr 20 20:00:51.323423 containerd[1659]: time="2026-04-20T20:00:51.321249137Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 20:00:56.157923 kubelet[3163]: E0420 20:00:56.139571 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 20:00:56.157923 kubelet[3163]: E0420 20:00:56.152486 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 20 20:00:56.576502 kubelet[3163]: E0420 20:00:56.082490 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:00:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:00:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:00:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:00:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-20T20:00:24Z\\\",\\\"message\\\":\\\"PLEG is not healthy: pleg was last seen active 3m14.020191996s ago; threshold is 3m0s\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded"
Apr 20 20:00:57.466814 containerd[1659]: time="2026-04-20T20:00:57.449762055Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 20:01:01.556093 kubelet[3163]: E0420 20:01:01.544838 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 20:01:05.594098 kubelet[3163]: E0420 20:01:05.589824 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret"
Apr 20 20:01:06.170000 audit[7144]: AUDIT1101 pid=7144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:01:06.456565 kernel: audit: type=1101 audit(1776715266.170:1320): pid=7144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:01:06.518033 sshd[7144]: Accepted publickey for core from 10.0.0.1 port 54082 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE
Apr 20 20:01:07.314000 audit[7144]: AUDIT1103 pid=7144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:01:07.438000 audit[7144]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4369a070 a2=3 a3=0 items=0 ppid=1 pid=7144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=68 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 20:01:07.438000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 20:01:07.749707 kernel: audit: type=1103 audit(1776715267.314:1321): pid=7144 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:01:07.821205 kernel: audit: type=1006 audit(1776715267.438:1322): pid=7144 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=68 res=1
Apr 20 20:01:07.882040 kernel: audit: type=1300 audit(1776715267.438:1322): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4369a070 a2=3 a3=0 items=0 ppid=1 pid=7144 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=68 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 20 20:01:07.898426 kernel: audit: type=1327 audit(1776715267.438:1322): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Apr 20 20:01:07.930986 sshd-session[7144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 20 20:01:08.349496 kubelet[3163]: I0420 20:01:08.346460 3163 request.go:752] "Waited before sending request" delay="1.492368471s" reason="retries: 6, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=3198&timeout=43m45s&timeoutSeconds=2625&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.14:6443/api/v1/namespaces/calico-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=3198&timeout=43m45s&timeoutSeconds=2625&watch=true"
Apr 20 20:01:11.547842 systemd-logind[1627]: New session '68' of user 'core' with class 'user' and type 'tty'.
Apr 20 20:01:12.351281 systemd[1]: Started session-68.scope - Session 68 of User core.
Apr 20 20:01:16.272000 audit[7144]: AUDIT1105 pid=7144 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:01:16.805761 kernel: audit: type=1105 audit(1776715276.272:1323): pid=7144 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:01:17.141061 kubelet[3163]: I0420 20:01:11.293013 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": net/http: TLS handshake timeout"
Apr 20 20:01:17.368000 audit[7155]: AUDIT1103 pid=7155 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:01:18.597392 kernel: audit: type=1103 audit(1776715277.368:1324): pid=7155 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:01:18.735900 kubelet[3163]: E0420 20:01:18.321205 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 20 20:01:20.283279 kubelet[3163]: E0420 20:01:20.183478 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 20:01:24.167626 kubelet[3163]: E0420 20:01:13.335379 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 20 20:01:29.981453 containerd[1659]: time="2026-04-20T20:01:29.853338897Z" level=info msg="container event discarded" container=38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a type=CONTAINER_STOPPED_EVENT
Apr 20 20:01:30.341413 containerd[1659]: time="2026-04-20T20:01:30.144890233Z" level=info msg="container event discarded" container=13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d type=CONTAINER_STOPPED_EVENT
Apr 20 20:01:30.421229 containerd[1659]: time="2026-04-20T20:01:30.419406248Z" level=info msg="container event discarded" container=292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a type=CONTAINER_STOPPED_EVENT
Apr 20 20:01:32.643153 containerd[1659]: time="2026-04-20T20:01:32.593455514Z" level=info msg="container event discarded" container=6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616 type=CONTAINER_CREATED_EVENT
Apr 20 20:01:33.273467 containerd[1659]: time="2026-04-20T20:01:33.271943079Z" level=info msg="container event discarded" container=7ad53f3fafdaae0a232b3f26afb202844744846718e9540c77ad7513e837efd9 type=CONTAINER_DELETED_EVENT
Apr 20 20:01:35.828929 containerd[1659]: time="2026-04-20T20:01:35.743137365Z" level=info msg="container event discarded" container=094aeb199e5141e10c4aa1ca00e31f3c2ea5db40bdffe7260f4ac4067e20028a type=CONTAINER_DELETED_EVENT
Apr 20 20:01:38.162681 containerd[1659]: time="2026-04-20T20:01:38.141326221Z" level=info msg="container event discarded" container=31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2 type=CONTAINER_CREATED_EVENT
Apr 20 20:01:38.762258 containerd[1659]: time="2026-04-20T20:01:38.340224842Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 20:01:41.744781 containerd[1659]: time="2026-04-20T20:01:41.721914056Z" level=info msg="container event discarded" container=336c04308422bb47be47dbabd9a54c52608564fda7f37c62a62203892575bd65 type=CONTAINER_DELETED_EVENT
Apr 20 20:01:42.757179 containerd[1659]: time="2026-04-20T20:01:42.747354824Z" level=info msg="container event discarded" container=54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c type=CONTAINER_CREATED_EVENT
Apr 20 20:01:51.134787 kubelet[3163]: E0420 20:01:51.107351 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded"
Apr 20 20:01:53.354104 kubelet[3163]: E0420 20:01:53.351484 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 20 20:01:54.024259 kubelet[3163]: E0420 20:01:53.308477 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 20:01:54.831152 kubelet[3163]: E0420 20:01:50.143289 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 20 20:01:55.332771 kubelet[3163]: E0420 20:01:55.330931 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 20:01:55.799366 kubelet[3163]: E0420 20:01:55.524839 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Apr 20 20:01:55.999138 kubelet[3163]: E0420 20:01:54.667946 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s"
Apr 20 20:01:56.932051 containerd[1659]: time="2026-04-20T20:01:56.931966603Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 20:01:57.498455 kubelet[3163]: E0420 20:01:57.495835 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m47.544s"
Apr 20 20:01:57.930894 kubelet[3163]: E0420 20:01:57.924701 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 20:01:57.930894 kubelet[3163]: I0420 20:01:57.929760 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": net/http: TLS handshake timeout"
Apr 20 20:01:58.491071 containerd[1659]: time="2026-04-20T20:01:58.490978384Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 20:01:58.506403 kubelet[3163]: E0420 20:01:58.506063 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 20:01:58.513099 containerd[1659]: time="2026-04-20T20:01:58.509224954Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 20:01:58.514608 kubelet[3163]: E0420 20:01:58.514445 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 20:01:58.516754 containerd[1659]: time="2026-04-20T20:01:58.516692759Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 20:01:58.516857 kubelet[3163]: E0420 20:01:58.516816 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 20:01:58.517447 containerd[1659]: time="2026-04-20T20:01:58.517321054Z" level=error msg="ExecSync for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 20 20:01:58.538769 kubelet[3163]: E0420 20:01:58.538625 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 20 20:01:58.541136 kubelet[3163]: E0420 20:01:58.540009 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:01:58.573926 kubelet[3163]: E0420 20:01:58.573144 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:01:58.593522 containerd[1659]: time="2026-04-20T20:01:58.589483223Z" level=info msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" with timeout 30 (s)"
Apr 20 20:01:58.683117 containerd[1659]: time="2026-04-20T20:01:58.592447895Z" level=info msg="StopContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" with timeout 30 (s)"
Apr 20 20:01:58.683117 containerd[1659]: time="2026-04-20T20:01:58.682927165Z" level=info msg="Skipping the sending of signal terminated to container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" because a prior stop with timeout>0 request already sent the signal"
Apr 20 20:01:58.722355 containerd[1659]: time="2026-04-20T20:01:58.719291936Z" level=info msg="Skipping the sending of signal terminated to container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" because a prior stop with timeout>0 request already sent the signal"
Apr 20 20:02:01.249935 kubelet[3163]: E0420 20:02:01.246874 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 20 20:02:01.465522 kubelet[3163]: E0420 20:02:01.276250 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 20 20:02:02.281013 kubelet[3163]: E0420 20:02:02.279124 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition
Apr 20 20:02:02.524148 kubelet[3163]: E0420 20:02:02.495061 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 20:04:04.300800364 +0000 UTC m=+3342.395050061 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition
Apr 20 20:02:02.684254 kubelet[3163]: E0420 20:02:02.683823 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.745s"
Apr 20 20:02:02.703086 kubelet[3163]: I0420 20:02:02.684434 3163 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 20 20:02:03.265835 sshd[7155]: Connection closed by 10.0.0.1 port 54082
Apr 20 20:02:03.345911 sshd-session[7144]: pam_unix(sshd:session): session closed for user core
Apr 20 20:02:03.885000 audit[7144]: AUDIT1106 pid=7144 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:02:03.926000 audit[7144]: AUDIT1104 pid=7144 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:02:04.155323 kernel: audit: type=1106 audit(1776715323.885:1325): pid=7144 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:02:04.213488 kernel: audit: type=1104 audit(1776715323.926:1326): pid=7144 uid=0 auid=500 ses=68 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Apr 20 20:02:04.350125 systemd[1]: sshd@66-4111-10.0.0.14:22-10.0.0.1:54082.service: Deactivated successfully.
Apr 20 20:02:04.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-4111-10.0.0.14:22-10.0.0.1:54082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:02:04.649414 kernel: audit: type=1131 audit(1776715324.519:1327): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@66-4111-10.0.0.14:22-10.0.0.1:54082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 20 20:02:04.548305 systemd[1]: sshd@66-4111-10.0.0.14:22-10.0.0.1:54082.service: Consumed 7.430s CPU time, 4.4M memory peak.
Apr 20 20:02:04.940389 systemd[1]: session-68.scope: Deactivated successfully.
Apr 20 20:02:05.098745 systemd[1]: session-68.scope: Consumed 22.293s CPU time, 19.5M memory peak.
Apr 20 20:02:05.291456 kubelet[3163]: E0420 20:02:05.117175 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret"
Apr 20 20:02:05.144170 systemd-logind[1627]: Session 68 logged out. Waiting for processes to exit.
Apr 20 20:02:05.478657 systemd-logind[1627]: Removed session 68.
Apr 20 20:02:05.895125 kubelet[3163]: E0420 20:02:05.887051 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.861s" Apr 20 20:02:06.329277 kubelet[3163]: E0420 20:02:06.316934 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token\": net/http: TLS handshake timeout" cacheKey="\"kube-proxy\"/\"kube-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"kube-proxy-c6mkn\", UID:\"526e8f89-8d32-4504-b20c-956610c7bb82\"}" Apr 20 20:02:07.378630 kubelet[3163]: E0420 20:02:07.373261 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:02:07.491617 kubelet[3163]: E0420 20:02:07.379864 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:02:07.598651 kubelet[3163]: E0420 20:02:07.598433 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:07.953120 kubelet[3163]: E0420 20:02:07.947280 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 20:04:09.698521112 +0000 UTC m=+3347.792770832 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:02:08.370001 kubelet[3163]: E0420 20:02:08.322106 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 20:02:08.849687 kubelet[3163]: I0420 20:02:08.846951 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" Apr 20 20:02:08.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-4112-10.0.0.14:22-10.0.0.1:40174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:02:08.930155 systemd[1]: Started sshd@67-4112-10.0.0.14:22-10.0.0.1:40174.service - OpenSSH per-connection server daemon (10.0.0.1:40174). Apr 20 20:02:09.072811 kernel: audit: type=1130 audit(1776715328.929:1328): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-4112-10.0.0.14:22-10.0.0.1:40174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:02:09.629379 kubelet[3163]: E0420 20:02:09.595174 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.281s" Apr 20 20:02:11.023986 kubelet[3163]: E0420 20:02:11.023718 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.428s" Apr 20 20:02:11.622748 kubelet[3163]: E0420 20:02:11.622180 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 20 20:02:11.988815 kubelet[3163]: E0420 20:02:11.946065 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:02:12.781065 containerd[1659]: time="2026-04-20T20:02:12.768077420Z" level=info msg="container event discarded" container=6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616 type=CONTAINER_STARTED_EVENT Apr 20 20:02:13.174896 kubelet[3163]: E0420 20:02:13.138342 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:02:15.351033 kubelet[3163]: E0420 20:02:15.350477 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 20:02:15.574083 kubelet[3163]: E0420 20:02:15.554362 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:02:15.726525 kubelet[3163]: E0420 20:02:15.726327 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 20:02:16.645000 audit[7204]: AUDIT1101 pid=7204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:17.012976 kernel: audit: type=1101 audit(1776715336.645:1329): pid=7204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:17.045202 sshd[7204]: Accepted publickey for core from 10.0.0.1 port 40174 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 20:02:18.060000 audit[7204]: AUDIT1103 pid=7204 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:18.126000 audit[7204]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe60296480 a2=3 a3=0 items=0 ppid=1 pid=7204 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=69 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:02:18.126000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:02:18.629062 kernel: audit: type=1103 audit(1776715338.060:1330): pid=7204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:18.686395 kernel: audit: type=1006 audit(1776715338.126:1331): pid=7204 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=69 res=1 Apr 20 20:02:18.821297 kernel: audit: type=1300 audit(1776715338.126:1331): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe60296480 a2=3 a3=0 items=0 ppid=1 pid=7204 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=69 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:02:18.823325 kernel: audit: type=1327 audit(1776715338.126:1331): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:02:18.823115 sshd-session[7204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:02:19.657716 containerd[1659]: time="2026-04-20T20:02:19.657224378Z" level=info msg="container event discarded" container=54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c type=CONTAINER_STARTED_EVENT Apr 20 
20:02:21.256133 kubelet[3163]: I0420 20:02:21.057002 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": net/http: TLS handshake timeout" Apr 20 20:02:22.854890 systemd-logind[1627]: New session '69' of user 'core' with class 'user' and type 'tty'. Apr 20 20:02:23.146984 systemd[1]: Started session-69.scope - Session 69 of User core. Apr 20 20:02:23.432000 audit[7204]: AUDIT1105 pid=7204 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:23.563000 audit[7211]: AUDIT1103 pid=7211 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:23.648908 kernel: audit: type=1105 audit(1776715343.432:1332): pid=7204 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:23.654158 kernel: audit: type=1103 audit(1776715343.563:1333): pid=7211 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:24.255484 kubelet[3163]: E0420 20:02:24.253827 3163 kubelet.go:2627] "Housekeeping took longer than expected" 
err="housekeeping took too long" expected="1s" actual="13.034s" Apr 20 20:02:25.341112 containerd[1659]: time="2026-04-20T20:02:25.193493893Z" level=info msg="container event discarded" container=31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2 type=CONTAINER_STARTED_EVENT Apr 20 20:02:26.719771 kubelet[3163]: E0420 20:02:26.717268 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/serviceaccounts/tigera-operator/token\": net/http: TLS handshake timeout" cacheKey="\"tigera-operator\"/\"tigera-operator\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"tigera-operator-6bf85f8dd-hvgdj\", UID:\"22f1ff03-de8a-48db-b03e-54fdbe0d3d5f\"}" Apr 20 20:02:28.581318 kubelet[3163]: E0420 20:02:28.579021 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:02:28.774990 containerd[1659]: time="2026-04-20T20:02:28.773832312Z" level=info msg="Kill container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\"" Apr 20 20:02:29.180386 containerd[1659]: time="2026-04-20T20:02:28.886448105Z" level=info msg="Kill container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\"" Apr 20 20:02:29.882066 kubelet[3163]: E0420 20:02:29.287245 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:02:34.218886 kubelet[3163]: E0420 20:02:33.195384 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 20:04:33.478800235 +0000 UTC m=+3371.573049942 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:02:36.600236 kubelet[3163]: I0420 20:02:36.242457 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": net/http: TLS handshake timeout" Apr 20 20:02:37.216009 kubelet[3163]: E0420 20:02:37.214270 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:02:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:02:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:02:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:02:24Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 20 20:02:39.495825 kubelet[3163]: E0420 20:02:37.733874 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="7s" Apr 20 20:02:40.189234 
kubelet[3163]: I0420 20:02:37.739519 3163 request.go:752] "Waited before sending request" delay="1.754539946s" reason="retries: 9, retry-after: 1s - retry-reason: due to retryable error, error: Get \"https://10.0.0.14:6443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dlocalhost&resourceVersion=3211&timeout=9m9s&timeoutSeconds=549&watch=true\": net/http: TLS handshake timeout" verb="GET" URL="https://10.0.0.14:6443/api/v1/nodes?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dlocalhost&resourceVersion=3211&timeout=9m9s&timeoutSeconds=549&watch=true" Apr 20 20:02:40.580728 containerd[1659]: time="2026-04-20T20:02:40.498167348Z" level=error msg="get state for 8146fe3f3e0af47161632bee53d54773f219499c8c9f4ffb34b4fe7cde3fa71b" error="context deadline exceeded" Apr 20 20:02:40.580728 containerd[1659]: time="2026-04-20T20:02:40.498294616Z" level=warning msg="unknown status" status=0 Apr 20 20:02:40.891492 containerd[1659]: time="2026-04-20T20:02:40.843109319Z" level=error msg="ttrpc: received message on inactive stream" stream=55 Apr 20 20:02:44.673769 containerd[1659]: time="2026-04-20T20:02:44.592297021Z" level=error msg="ExecSync for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 20 20:02:45.204381 kubelet[3163]: E0420 20:02:39.300668 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:02:46.288417 sshd[7211]: Connection closed by 10.0.0.1 port 40174 Apr 20 20:02:46.498165 sshd-session[7204]: pam_unix(sshd:session): session closed for user core Apr 20 20:02:46.952000 audit[7204]: AUDIT1106 pid=7204 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:47.014000 audit[7204]: AUDIT1104 pid=7204 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:47.587722 kernel: audit: type=1106 audit(1776715366.952:1334): pid=7204 uid=0 auid=500 ses=69 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:47.674061 kernel: audit: type=1104 audit(1776715367.014:1335): pid=7204 uid=0 auid=500 ses=69 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:02:48.033872 systemd[1]: sshd@67-4112-10.0.0.14:22-10.0.0.1:40174.service: Deactivated successfully. Apr 20 20:02:48.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-4112-10.0.0.14:22-10.0.0.1:40174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:02:48.630135 kernel: audit: type=1131 audit(1776715368.230:1336): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@67-4112-10.0.0.14:22-10.0.0.1:40174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:02:48.259288 systemd[1]: sshd@67-4112-10.0.0.14:22-10.0.0.1:40174.service: Consumed 3.196s CPU time, 4.4M memory peak. Apr 20 20:02:48.631148 systemd[1]: session-69.scope: Deactivated successfully. Apr 20 20:02:48.823635 systemd[1]: session-69.scope: Consumed 8.609s CPU time, 17.9M memory peak. Apr 20 20:02:49.409088 systemd-logind[1627]: Session 69 logged out. Waiting for processes to exit. Apr 20 20:02:49.659482 systemd-logind[1627]: Removed session 69. Apr 20 20:02:53.295911 systemd[1]: Started sshd@68-8222-10.0.0.14:22-10.0.0.1:38752.service - OpenSSH per-connection server daemon (10.0.0.1:38752). Apr 20 20:02:53.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-8222-10.0.0.14:22-10.0.0.1:38752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:02:53.545446 kernel: audit: type=1130 audit(1776715373.472:1337): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-8222-10.0.0.14:22-10.0.0.1:38752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:02:54.321658 kubelet[3163]: E0420 20:02:53.884352 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 20:02:55.191594 kubelet[3163]: E0420 20:02:53.166461 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 20:02:56.154272 kubelet[3163]: E0420 20:02:56.151610 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 20 20:02:59.928980 kubelet[3163]: E0420 20:02:58.879780 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:03:00.934159 containerd[1659]: time="2026-04-20T20:03:00.765036167Z" level=info msg="TaskExit event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616}" Apr 20 
20:03:01.473989 kubelet[3163]: E0420 20:03:01.295429 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:03:02.154075 kubelet[3163]: E0420 20:03:02.103134 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 20:04:58.130037649 +0000 UTC m=+3396.224292466 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition Apr 20 20:03:06.345523 kubelet[3163]: E0420 20:03:02.818505 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 20:03:06.940893 containerd[1659]: time="2026-04-20T20:03:06.244499288Z" level=info msg="container event discarded" container=13a5307cebf8e24a8cf569f8b508b92d0a2c1da1c970849479297704cecf330d type=CONTAINER_DELETED_EVENT Apr 20 20:03:09.006416 systemd[1]: cri-containerd-31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2.scope: Deactivated successfully. 
Apr 20 20:03:09.029000 audit: BPF prog-id=239 op=UNLOAD Apr 20 20:03:09.035000 audit: BPF prog-id=231 op=UNLOAD Apr 20 20:03:09.070147 systemd[1]: cri-containerd-31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2.scope: Consumed 25.607s CPU time, 19.5M memory peak. Apr 20 20:03:09.320217 kernel: audit: type=1334 audit(1776715389.029:1338): prog-id=239 op=UNLOAD Apr 20 20:03:09.377897 kernel: audit: type=1334 audit(1776715389.035:1339): prog-id=231 op=UNLOAD Apr 20 20:03:09.675955 containerd[1659]: time="2026-04-20T20:03:09.651884901Z" level=info msg="received container exit event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042}" Apr 20 20:03:10.438724 containerd[1659]: time="2026-04-20T20:03:10.418130037Z" level=error msg="Failed to handle backOff event container_id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" id:\"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" pid:2981 exit_status:1 exited_at:{seconds:1776712820 nanos:249587616} for ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 20:03:10.930004 containerd[1659]: time="2026-04-20T20:03:10.926785264Z" level=error msg="ttrpc: received message on inactive stream" stream=321 Apr 20 20:03:10.982656 containerd[1659]: time="2026-04-20T20:03:10.930192958Z" level=error msg="ttrpc: received message on inactive stream" stream=325 Apr 20 20:03:11.747362 kubelet[3163]: E0420 20:03:11.642965 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 20 20:03:12.574000 audit[7243]: AUDIT1101 pid=7243 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:12.881000 audit[7243]: AUDIT1103 pid=7243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:12.927750 sshd[7243]: Accepted publickey for core from 10.0.0.1 port 38752 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 20:03:12.916000 audit[7243]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc80103e80 a2=3 a3=0 items=0 ppid=1 pid=7243 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=70 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:03:12.916000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:03:13.081485 kernel: audit: type=1101 audit(1776715392.574:1340): pid=7243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:13.096426 kernel: audit: type=1103 audit(1776715392.881:1341): pid=7243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:13.091200 sshd-session[7243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:03:13.345220 kernel: audit: type=1006 audit(1776715392.916:1342): pid=7243 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=70 res=1 Apr 20 20:03:13.368725 kernel: audit: type=1300 audit(1776715392.916:1342): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc80103e80 a2=3 a3=0 items=0 ppid=1 pid=7243 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=70 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:03:13.369348 kernel: audit: type=1327 audit(1776715392.916:1342): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:03:13.491520 kubelet[3163]: E0420 20:03:13.490126 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 20:03:13.491520 kubelet[3163]: E0420 20:03:13.196524 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="46.282s" Apr 20 20:03:13.651191 kubelet[3163]: I0420 20:03:11.838443 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": net/http: TLS handshake timeout" Apr 20 20:03:14.372091 kubelet[3163]: E0420 20:03:12.925753 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 20:03:14.604320 systemd-logind[1627]: New session '70' of user 'core' with 
class 'user' and type 'tty'. Apr 20 20:03:14.751956 systemd[1]: Started session-70.scope - Session 70 of User core. Apr 20 20:03:16.580000 audit[7243]: AUDIT1105 pid=7243 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:16.646436 kernel: audit: type=1105 audit(1776715396.580:1343): pid=7243 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:16.660437 containerd[1659]: time="2026-04-20T20:03:16.610920887Z" level=info msg="TaskExit event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424}" Apr 20 20:03:16.812000 audit[7255]: AUDIT1103 pid=7255 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:16.866325 kernel: audit: type=1103 audit(1776715396.812:1344): pid=7255 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:18.053128 systemd[1]: cri-containerd-54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c.scope: Deactivated successfully. 
Apr 20 20:03:18.240000 audit: BPF prog-id=235 op=UNLOAD Apr 20 20:03:18.286000 audit: BPF prog-id=230 op=UNLOAD Apr 20 20:03:18.913820 kernel: audit: type=1334 audit(1776715398.240:1345): prog-id=235 op=UNLOAD Apr 20 20:03:18.346162 systemd[1]: cri-containerd-54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c.scope: Consumed 5min 13.532s CPU time, 208.6M memory peak, 1.1M read from disk. Apr 20 20:03:18.971028 kernel: audit: type=1334 audit(1776715398.286:1346): prog-id=230 op=UNLOAD Apr 20 20:03:19.722293 containerd[1659]: time="2026-04-20T20:03:19.502521966Z" level=info msg="received container exit event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607}" Apr 20 20:03:20.059503 containerd[1659]: time="2026-04-20T20:03:19.723378687Z" level=error msg="failed to handle container TaskExit event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042}" error="failed to stop container: context deadline exceeded" Apr 20 20:03:21.875293 containerd[1659]: time="2026-04-20T20:03:21.858456730Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 20 20:03:22.327627 kubelet[3163]: E0420 20:03:15.355739 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:03:26.578190 containerd[1659]: time="2026-04-20T20:03:26.575500118Z" level=error msg="Failed to handle backOff event container_id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" id:\"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" pid:2993 exit_status:1 exited_at:{seconds:1776712815 nanos:539071424} for d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 20:03:26.734071 containerd[1659]: time="2026-04-20T20:03:26.589077930Z" level=info msg="TaskExit event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042}" Apr 20 20:03:27.883852 containerd[1659]: time="2026-04-20T20:03:27.881135779Z" level=error msg="ttrpc: received message on inactive stream" stream=311 Apr 20 20:03:28.148417 containerd[1659]: time="2026-04-20T20:03:28.135208910Z" level=error msg="ttrpc: received message on inactive stream" stream=313 Apr 20 20:03:29.766902 containerd[1659]: time="2026-04-20T20:03:29.765446579Z" level=error msg="failed to handle container TaskExit event 
container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607}" error="failed to stop container: context deadline exceeded" Apr 20 20:03:30.530223 containerd[1659]: time="2026-04-20T20:03:30.528347002Z" level=error msg="ttrpc: received message on inactive stream" stream=31 Apr 20 20:03:30.530223 containerd[1659]: time="2026-04-20T20:03:30.529232746Z" level=error msg="ttrpc: received message on inactive stream" stream=29 Apr 20 20:03:31.931397 kubelet[3163]: E0420 20:03:31.925051 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 20:03:32.292413 kubelet[3163]: E0420 20:03:32.084944 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Apr 20 20:03:32.449907 kubelet[3163]: E0420 20:03:32.081378 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" interval="7s" Apr 20 20:03:33.110975 kubelet[3163]: I0420 20:03:33.076127 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:03:34.305434 kubelet[3163]: E0420 20:03:34.291863 3163 reflector.go:200] "Failed to watch" err="failed 
to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": dial tcp 10.0.0.14:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.0.14:59656->10.0.0.14:6443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 20:03:34.513006 kubelet[3163]: E0420 20:03:34.504385 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:03:36.867934 containerd[1659]: time="2026-04-20T20:03:36.862829881Z" level=error msg="Failed to handle backOff event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042} for 31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2" error="failed to handle container TaskExit event: failed to stop container: context deadline 
exceeded" Apr 20 20:03:37.028649 containerd[1659]: time="2026-04-20T20:03:36.924185008Z" level=info msg="TaskExit event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607}" Apr 20 20:03:37.862299 containerd[1659]: time="2026-04-20T20:03:37.853298450Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 20 20:03:37.931716 containerd[1659]: time="2026-04-20T20:03:37.930054367Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Apr 20 20:03:38.208478 kubelet[3163]: E0420 20:03:38.092068 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:03:38.208478 kubelet[3163]: E0420 20:03:38.097366 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:03:38.494049 kubelet[3163]: I0420 20:03:38.095428 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:03:43.332249 sshd[7255]: Connection closed by 10.0.0.1 port 38752 Apr 20 20:03:43.395766 sshd-session[7243]: pam_unix(sshd:session): session closed for user core Apr 20 20:03:44.069000 audit[7243]: AUDIT1106 pid=7243 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:44.130000 audit[7243]: AUDIT1104 pid=7243 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:44.899791 kernel: audit: type=1106 audit(1776715424.069:1347): pid=7243 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:45.035133 kernel: audit: type=1104 audit(1776715424.130:1348): pid=7243 uid=0 auid=500 ses=70 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:03:45.180120 systemd[1]: sshd@68-8222-10.0.0.14:22-10.0.0.1:38752.service: Deactivated successfully. Apr 20 20:03:45.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-8222-10.0.0.14:22-10.0.0.1:38752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:03:45.403321 systemd[1]: sshd@68-8222-10.0.0.14:22-10.0.0.1:38752.service: Consumed 4.859s CPU time, 4.1M memory peak. Apr 20 20:03:45.786630 kernel: audit: type=1131 audit(1776715425.397:1349): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@68-8222-10.0.0.14:22-10.0.0.1:38752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:03:46.251695 kubelet[3163]: E0420 20:03:45.540018 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:03:46.252851 systemd[1]: session-70.scope: Deactivated successfully. Apr 20 20:03:46.253633 systemd[1]: session-70.scope: Consumed 17.736s CPU time, 19.3M memory peak. Apr 20 20:03:46.457891 systemd-logind[1627]: Session 70 logged out. Waiting for processes to exit. Apr 20 20:03:46.665227 kubelet[3163]: I0420 20:03:46.300135 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:03:46.974392 containerd[1659]: time="2026-04-20T20:03:46.952635030Z" level=error msg="Failed to handle backOff event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607} for 54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 20:03:46.974392 containerd[1659]: time="2026-04-20T20:03:46.958037240Z" level=info msg="TaskExit event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042}" Apr 20 20:03:47.024186 systemd-logind[1627]: Removed session 70. 
Apr 20 20:03:47.891997 containerd[1659]: time="2026-04-20T20:03:47.821054999Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 20 20:03:47.969319 containerd[1659]: time="2026-04-20T20:03:47.965402286Z" level=error msg="ttrpc: received message on inactive stream" stream=41 Apr 20 20:03:49.449071 kubelet[3163]: I0420 20:03:49.056523 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:03:50.282504 systemd[1]: Started sshd@69-8223-10.0.0.14:22-10.0.0.1:60006.service - OpenSSH per-connection server daemon (10.0.0.1:60006). Apr 20 20:03:50.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-8223-10.0.0.14:22-10.0.0.1:60006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:03:50.746353 kernel: audit: type=1130 audit(1776715430.390:1350): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-8223-10.0.0.14:22-10.0.0.1:60006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:03:50.814178 kubelet[3163]: E0420 20:03:49.147239 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:03:51.354616 kubelet[3163]: E0420 20:03:50.961330 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:03:52.654745 kubelet[3163]: E0420 20:03:52.380842 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 20:03:54.473092 kubelet[3163]: I0420 
20:03:54.460031 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:03:56.999645 containerd[1659]: time="2026-04-20T20:03:56.978232747Z" level=error msg="Failed to handle backOff event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042} for 31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 20:03:57.626358 containerd[1659]: time="2026-04-20T20:03:57.458422146Z" level=info msg="TaskExit event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607}" Apr 20 20:03:58.501067 containerd[1659]: time="2026-04-20T20:03:58.500738698Z" level=error msg="ttrpc: received message on inactive stream" stream=53 Apr 20 20:03:58.595470 containerd[1659]: time="2026-04-20T20:03:58.592647454Z" level=error msg="ttrpc: received message on inactive stream" stream=49 Apr 20 20:03:59.060391 kubelet[3163]: E0420 20:03:59.047618 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 20:03:59.288076 kubelet[3163]: E0420 20:03:58.679202 
3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:03:59.413170 containerd[1659]: time="2026-04-20T20:03:59.372310258Z" level=error msg="ExecSync for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 20 20:03:59.958811 kubelet[3163]: E0420 20:03:59.679389 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:04:01.372174 kubelet[3163]: E0420 20:04:01.352161 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="47.86s" Apr 20 20:04:01.478999 kubelet[3163]: E0420 20:04:01.070135 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 20:04:01.942249 kubelet[3163]: I0420 20:04:01.900875 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:02.163386 kubelet[3163]: E0420 20:04:02.072249 3163 
event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:04:03.142088 kubelet[3163]: E0420 20:04:03.135443 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:03:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:03:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:03:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:03:57Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 
20:04:03.237323 kubelet[3163]: I0420 20:04:03.232761 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:03.920192 kubelet[3163]: I0420 20:04:03.899871 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:04.076945 kubelet[3163]: E0420 20:04:04.000713 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:04.359207 kubelet[3163]: E0420 20:04:04.339228 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:04.817986 kubelet[3163]: E0420 20:04:04.816027 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:04.862359 kubelet[3163]: E0420 20:04:04.861427 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.5s" Apr 20 20:04:04.877000 audit[7321]: AUDIT1101 pid=7321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:04.910974 kernel: audit: type=1101 audit(1776715444.877:1351): pid=7321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:04.928458 sshd[7321]: Accepted publickey for core from 10.0.0.1 port 60006 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 20:04:04.989876 kubelet[3163]: I0420 20:04:04.925967 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:04.989876 kubelet[3163]: E0420 20:04:04.934126 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:04.989876 kubelet[3163]: E0420 20:04:04.950436 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:04:05.046000 audit[7321]: AUDIT1103 pid=7321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:05.064000 audit[7321]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcda98c9b0 a2=3 a3=0 items=0 ppid=1 pid=7321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=71 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:04:05.064000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:04:05.237998 kernel: audit: type=1103 audit(1776715445.046:1352): pid=7321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:05.242660 kernel: audit: type=1006 audit(1776715445.064:1353): pid=7321 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=71 res=1 Apr 20 20:04:05.240082 sshd-session[7321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:04:05.289881 kernel: audit: type=1300 audit(1776715445.064:1353): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcda98c9b0 a2=3 a3=0 items=0 ppid=1 pid=7321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=71 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:04:05.290039 kubelet[3163]: I0420 20:04:05.242108 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:05.290446 kernel: audit: type=1327 audit(1776715445.064:1353): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:04:05.290671 kubelet[3163]: I0420 20:04:05.288490 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get 
\"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:05.520199 kubelet[3163]: I0420 20:04:05.498116 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:05.941170 kubelet[3163]: I0420 20:04:05.832731 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:05.990635 systemd-logind[1627]: New session '71' of user 'core' with class 'user' and type 'tty'. Apr 20 20:04:06.159100 systemd[1]: Started session-71.scope - Session 71 of User core. 
Apr 20 20:04:06.331000 audit[7321]: AUDIT1105 pid=7321 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:06.463500 kernel: audit: type=1105 audit(1776715446.331:1354): pid=7321 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:06.652000 audit[7350]: AUDIT1103 pid=7350 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:06.700385 kernel: audit: type=1103 audit(1776715446.652:1355): pid=7350 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:06.747362 kubelet[3163]: I0420 20:04:06.664986 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:07.079120 kubelet[3163]: E0420 20:04:06.987073 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 20:04:07.299059 containerd[1659]: time="2026-04-20T20:04:07.276255322Z" level=error msg="failed 
to delete task" error="context deadline exceeded" id=54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c Apr 20 20:04:07.457263 containerd[1659]: time="2026-04-20T20:04:07.447840596Z" level=error msg="Failed to handle backOff event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607} for 54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:04:07.457263 containerd[1659]: time="2026-04-20T20:04:07.448061045Z" level=info msg="TaskExit event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042}" Apr 20 20:04:07.651509 kubelet[3163]: E0420 20:04:07.647122 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.776s" Apr 20 20:04:09.626402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c-rootfs.mount: Deactivated successfully. 
Apr 20 20:04:10.375236 containerd[1659]: time="2026-04-20T20:04:10.368207741Z" level=error msg="ttrpc: received message on inactive stream" stream=57 Apr 20 20:04:11.176845 kubelet[3163]: E0420 20:04:11.167422 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:04:11.947166 kubelet[3163]: E0420 20:04:11.938796 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 20:06:13.501972411 +0000 UTC m=+3471.596222124 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition Apr 20 20:04:16.632123 kubelet[3163]: E0420 20:04:16.389176 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 
UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:04:17.506400 containerd[1659]: time="2026-04-20T20:04:17.482596718Z" level=error msg="Failed to handle backOff event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042} for 31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 20:04:17.506400 containerd[1659]: time="2026-04-20T20:04:17.483214054Z" level=info msg="TaskExit event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607}" Apr 20 20:04:19.552076 containerd[1659]: time="2026-04-20T20:04:19.546450565Z" level=error msg="ttrpc: received message on inactive stream" stream=61 Apr 20 20:04:20.362695 containerd[1659]: time="2026-04-20T20:04:20.357158282Z" level=error msg="ttrpc: received message on inactive stream" stream=65 Apr 20 20:04:20.982109 kubelet[3163]: E0420 20:04:20.516420 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"kube-proxy\"/\"kube-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"kube-proxy-c6mkn\", UID:\"526e8f89-8d32-4504-b20c-956610c7bb82\"}" Apr 20 20:04:23.225020 kubelet[3163]: E0420 20:04:23.200701 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync 
configmap cache: timed out waiting for the condition Apr 20 20:04:24.253214 kubelet[3163]: E0420 20:04:24.250395 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 20:04:24.865247 kubelet[3163]: E0420 20:04:24.194288 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:04:26.568198 kubelet[3163]: E0420 20:04:26.535206 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:04:27.702222 containerd[1659]: time="2026-04-20T20:04:27.692199612Z" level=error msg="Failed to handle backOff event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607} for 54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 20:04:28.159068 containerd[1659]: time="2026-04-20T20:04:27.833274968Z" level=info msg="TaskExit event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042}" Apr 20 20:04:28.368441 kubelet[3163]: E0420 20:04:28.275245 3163 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 20:06:29.887102657 +0000 UTC m=+3487.981352383 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:04:29.037291 containerd[1659]: time="2026-04-20T20:04:28.974389498Z" level=error msg="ttrpc: received message on inactive stream" stream=63 Apr 20 20:04:29.284391 containerd[1659]: time="2026-04-20T20:04:29.088003007Z" level=error msg="ttrpc: received message on inactive stream" stream=67 Apr 20 20:04:29.284391 containerd[1659]: time="2026-04-20T20:04:29.175278343Z" level=error msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" to be killed: wait container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\": context canceled" Apr 20 20:04:29.284391 containerd[1659]: time="2026-04-20T20:04:29.207084542Z" level=error msg="StopContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" failed" error="rpc error: code = Canceled desc = an error occurs during waiting for container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" to be killed: wait container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\": context canceled" Apr 20 20:04:30.823452 kubelet[3163]: I0420 20:04:29.410851 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" 
pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:31.303181 kubelet[3163]: E0420 20:04:31.302138 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" Apr 20 20:04:32.394161 kubelet[3163]: E0420 20:04:31.366401 3163 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" Apr 20 20:04:32.670174 kubelet[3163]: E0420 20:04:32.477888 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" containerName="kube-controller-manager" containerID="containerd://d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf" gracePeriod=30 Apr 20 20:04:33.168274 kubelet[3163]: E0420 20:04:33.160774 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-controller-manager" containerID={"Type":"containerd","ID":"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf"} pod="kube-system/kube-controller-manager-localhost" Apr 20 20:04:33.290495 kubelet[3163]: E0420 20:04:33.289119 3163 kuberuntime_container.go:863] "Container termination failed with gracePeriod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" containerName="kube-scheduler" 
containerID="containerd://ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729" gracePeriod=30 Apr 20 20:04:33.385169 kubelet[3163]: E0420 20:04:33.289515 3163 kuberuntime_manager.go:1176] "killContainer for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerName="kube-scheduler" containerID={"Type":"containerd","ID":"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729"} pod="kube-system/kube-scheduler-localhost" Apr 20 20:04:33.469364 kubelet[3163]: E0420 20:04:32.330294 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 20:04:33.788006 kubelet[3163]: E0420 20:04:33.385440 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-controller-manager\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-controller-manager-localhost" podUID="e9ca41790ae21be9f4cbd451ade0acec" Apr 20 20:04:33.854008 kubelet[3163]: E0420 20:04:33.301177 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"kube-scheduler\" with KillContainerError: \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/kube-scheduler-localhost" podUID="33fee6ba1581201eda98a989140db110" Apr 20 20:04:34.359472 containerd[1659]: time="2026-04-20T20:04:34.353632923Z" level=error msg="ExecSync for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 20 20:04:34.568999 kubelet[3163]: E0420 
20:04:34.564920 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:04:36.595870 kubelet[3163]: I0420 20:04:36.590483 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:37.291854 sshd[7350]: Connection closed by 10.0.0.1 port 60006 Apr 20 20:04:37.392947 sshd-session[7321]: pam_unix(sshd:session): session closed for user core Apr 20 20:04:37.635000 audit[7321]: AUDIT1106 pid=7321 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:37.665000 audit[7321]: 
AUDIT1104 pid=7321 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:37.956144 kernel: audit: type=1106 audit(1776715477.635:1356): pid=7321 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:37.976910 containerd[1659]: time="2026-04-20T20:04:37.890477659Z" level=error msg="Failed to handle backOff event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042} for 31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 20:04:37.976910 containerd[1659]: time="2026-04-20T20:04:37.947894513Z" level=info msg="TaskExit event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607}" Apr 20 20:04:38.118359 kernel: audit: type=1104 audit(1776715477.665:1357): pid=7321 uid=0 auid=500 ses=71 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:38.662868 systemd[1]: sshd@69-8223-10.0.0.14:22-10.0.0.1:60006.service: Deactivated successfully. 
Apr 20 20:04:38.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-8223-10.0.0.14:22-10.0.0.1:60006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:04:39.004909 kernel: audit: type=1131 audit(1776715478.815:1358): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@69-8223-10.0.0.14:22-10.0.0.1:60006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:04:38.827291 systemd[1]: sshd@69-8223-10.0.0.14:22-10.0.0.1:60006.service: Consumed 5.865s CPU time, 4.3M memory peak. Apr 20 20:04:39.157290 containerd[1659]: time="2026-04-20T20:04:39.140002091Z" level=error msg="ttrpc: received message on inactive stream" stream=71 Apr 20 20:04:39.157290 containerd[1659]: time="2026-04-20T20:04:39.140056278Z" level=error msg="ttrpc: received message on inactive stream" stream=75 Apr 20 20:04:39.140090 systemd[1]: session-71.scope: Deactivated successfully. Apr 20 20:04:39.143002 systemd[1]: session-71.scope: Consumed 21.140s CPU time, 19M memory peak. Apr 20 20:04:39.325143 systemd-logind[1627]: Session 71 logged out. Waiting for processes to exit. Apr 20 20:04:39.570205 kubelet[3163]: E0420 20:04:38.350438 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 20:04:40.019842 systemd-logind[1627]: Removed session 71. 
Apr 20 20:04:40.584174 kubelet[3163]: E0420 20:04:40.579191 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 20:04:40.859876 kubelet[3163]: E0420 20:04:40.793198 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:04:40.990063 kubelet[3163]: I0420 20:04:40.793264 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:41.023898 kubelet[3163]: E0420 20:04:40.970857 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/serviceaccounts/tigera-operator/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"tigera-operator\"/\"tigera-operator\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"tigera-operator-6bf85f8dd-hvgdj\", UID:\"22f1ff03-de8a-48db-b03e-54fdbe0d3d5f\"}" Apr 20 20:04:41.335610 kubelet[3163]: E0420 20:04:41.333745 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="33.568s" Apr 20 20:04:42.191909 kubelet[3163]: E0420 20:04:42.189824 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get 
\"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:04:42.287369 kubelet[3163]: E0420 20:04:42.274426 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:04:42.340382 kubelet[3163]: E0420 20:04:42.291442 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:04:42.798656 kubelet[3163]: E0420 20:04:42.797779 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 20:06:44.753280412 +0000 UTC m=+3502.847530130 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:04:44.212927 systemd[1]: Started sshd@70-8224-10.0.0.14:22-10.0.0.1:34866.service - OpenSSH per-connection server daemon (10.0.0.1:34866). Apr 20 20:04:44.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-8224-10.0.0.14:22-10.0.0.1:34866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:04:44.617073 kernel: audit: type=1130 audit(1776715484.321:1359): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-8224-10.0.0.14:22-10.0.0.1:34866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:04:44.654030 kubelet[3163]: I0420 20:04:44.549207 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:44.987116 kubelet[3163]: E0420 20:04:44.986825 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:04:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:04:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:04:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:04:36Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:44.987116 kubelet[3163]: I0420 20:04:44.987076 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 
20:04:45.300294 kubelet[3163]: E0420 20:04:45.282137 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:04:45.674936 kubelet[3163]: E0420 20:04:45.659151 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:45.727760 kubelet[3163]: I0420 20:04:45.674007 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:45.879312 kubelet[3163]: E0420 20:04:45.878623 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:45.941399 kubelet[3163]: E0420 20:04:45.877288 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:04:46.074138 kubelet[3163]: E0420 20:04:46.052978 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:46.334290 kubelet[3163]: E0420 20:04:46.041178 3163 event.go:368] "Unable to write event (may 
retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:04:46.443978 kubelet[3163]: E0420 20:04:46.365043 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 20:04:48.025291 containerd[1659]: time="2026-04-20T20:04:48.021262899Z" level=error msg="failed to delete task" error="context deadline exceeded" id=54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c Apr 20 20:04:48.097212 containerd[1659]: time="2026-04-20T20:04:48.083200619Z" level=error msg="Failed to handle backOff event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 
nanos:78895607} for 54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:04:48.186213 containerd[1659]: time="2026-04-20T20:04:48.093317324Z" level=info msg="TaskExit event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034}" Apr 20 20:04:48.752173 containerd[1659]: time="2026-04-20T20:04:48.742177895Z" level=error msg="ttrpc: received message on inactive stream" stream=81 Apr 20 20:04:49.169234 kubelet[3163]: E0420 20:04:48.456654 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/serviceaccounts/csi-node-driver/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"csi-node-driver\"/\"calico-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"csi-node-driver-5h6vg\", UID:\"9f02930c-961c-4c4b-8334-b61cbd5c3d20\"}" Apr 20 20:04:49.233280 kubelet[3163]: E0420 20:04:49.091699 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:49.624087 kubelet[3163]: E0420 20:04:49.291311 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:04:50.183007 kubelet[3163]: E0420 20:04:50.167073 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/serviceaccounts/calico-node/token\": dial tcp 10.0.0.14:6443: connect: connection refused" 
cacheKey="\"calico-node\"/\"calico-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"calico-node-g9fs5\", UID:\"071d23f6-a94b-4165-9229-2d0570b516d8\"}" Apr 20 20:04:50.322944 kubelet[3163]: E0420 20:04:49.804527 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:04:51.243375 kubelet[3163]: I0420 20:04:51.240917 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:51.438255 kubelet[3163]: E0420 20:04:51.341233 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:04:51.635217 containerd[1659]: time="2026-04-20T20:04:51.586143746Z" level=info msg="StopContainer for \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" with timeout 30 (s)" Apr 20 20:04:51.788097 containerd[1659]: time="2026-04-20T20:04:51.787723931Z" level=info msg="Skipping the sending of signal terminated to container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\" because a prior stop with timeout>0 request already sent the signal" Apr 20 20:04:52.078188 kubelet[3163]: I0420 20:04:51.973979 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:52.360949 kubelet[3163]: E0420 
20:04:52.287076 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="10.894s" Apr 20 20:04:53.598000 audit[7426]: AUDIT1101 pid=7426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:53.715859 kernel: audit: type=1101 audit(1776715493.598:1360): pid=7426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:53.725042 sshd[7426]: Accepted publickey for core from 10.0.0.1 port 34866 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 20:04:53.840000 audit[7426]: AUDIT1103 pid=7426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:54.012000 audit[7426]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff30afd130 a2=3 a3=0 items=0 ppid=1 pid=7426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=72 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:04:54.012000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:04:54.185320 kernel: audit: type=1103 audit(1776715493.840:1361): pid=7426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 
20 20:04:54.185206 sshd-session[7426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:04:54.272265 kernel: audit: type=1006 audit(1776715494.012:1362): pid=7426 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=72 res=1 Apr 20 20:04:54.291595 kernel: audit: type=1300 audit(1776715494.012:1362): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff30afd130 a2=3 a3=0 items=0 ppid=1 pid=7426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=72 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:04:54.452228 kernel: audit: type=1327 audit(1776715494.012:1362): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:04:54.461136 kubelet[3163]: I0420 20:04:54.440295 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:04:56.282038 systemd-logind[1627]: New session '72' of user 'core' with class 'user' and type 'tty'. Apr 20 20:04:57.422484 systemd[1]: Started session-72.scope - Session 72 of User core. 
Apr 20 20:04:57.865170 containerd[1659]: time="2026-04-20T20:04:57.853018859Z" level=info msg="StopContainer for \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" with timeout 30 (s)" Apr 20 20:04:58.479372 containerd[1659]: time="2026-04-20T20:04:58.456120966Z" level=error msg="get state for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="context deadline exceeded" Apr 20 20:04:58.479372 containerd[1659]: time="2026-04-20T20:04:58.461932988Z" level=warning msg="unknown status" status=0 Apr 20 20:04:58.692505 containerd[1659]: time="2026-04-20T20:04:58.583651243Z" level=error msg="ttrpc: received message on inactive stream" stream=239 Apr 20 20:04:58.692505 containerd[1659]: time="2026-04-20T20:04:58.601095113Z" level=error msg="failed to delete task" error="context deadline exceeded" id=bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf Apr 20 20:04:58.692505 containerd[1659]: time="2026-04-20T20:04:58.688153488Z" level=error msg="Failed to handle backOff event container_id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" id:\"bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf\" pid:3768 exit_status:1 exited_at:{seconds:1776712842 nanos:981099034} for bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:04:58.861407 containerd[1659]: time="2026-04-20T20:04:58.850411736Z" level=info msg="Skipping the sending of signal terminated to container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\" because a prior stop with timeout>0 request already sent the signal" Apr 20 20:04:58.862817 containerd[1659]: time="2026-04-20T20:04:58.862367652Z" level=info msg="TaskExit event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" 
id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042}" Apr 20 20:04:59.151000 audit[7426]: AUDIT1105 pid=7426 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:59.346061 kernel: audit: type=1105 audit(1776715499.151:1363): pid=7426 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:04:59.457674 containerd[1659]: time="2026-04-20T20:04:58.891222430Z" level=error msg="failed to drain init process bab9bd1af0573a35e57c30cd12122b24cc656fc1002adaf3fd353192b94cf9bf io" error="context deadline exceeded" runtime=io.containerd.runc.v2 Apr 20 20:04:59.907000 audit[7461]: AUDIT1103 pid=7461 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:00.082298 kernel: audit: type=1103 audit(1776715499.907:1364): pid=7461 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:00.566526 containerd[1659]: time="2026-04-20T20:05:00.565289813Z" level=error msg="ttrpc: received message on inactive stream" stream=241 Apr 20 20:05:04.276295 kubelet[3163]: E0420 20:05:03.258189 3163 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9780e1db1\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9780e1db1 kube-system 3218 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:37 +0000 UTC,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:05:05.059865 containerd[1659]: time="2026-04-20T20:05:04.858440031Z" level=info msg="StopContainer for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" with timeout 2 (s)" Apr 20 20:05:06.741830 containerd[1659]: time="2026-04-20T20:05:06.699299738Z" level=info msg="Stop container \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" with signal terminated" Apr 20 20:05:07.179140 kubelet[3163]: E0420 20:05:05.025130 3163 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826f14a420523 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://127.0.0.1:10259/readyz\": dial tcp 127.0.0.1:10259: 
connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,LastTimestamp:2026-04-20 19:21:10.772475171 +0000 UTC m=+768.866724894,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:05:08.858141 containerd[1659]: time="2026-04-20T20:05:08.857429936Z" level=error msg="Failed to handle backOff event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042} for 31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 20:05:09.290634 containerd[1659]: time="2026-04-20T20:05:08.938308232Z" level=info msg="TaskExit event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607}" Apr 20 20:05:09.793378 kubelet[3163]: E0420 20:05:08.174982 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:05:10.727837 containerd[1659]: time="2026-04-20T20:05:10.693049091Z" level=error msg="ttrpc: received message on inactive stream" stream=83 Apr 20 20:05:10.900179 containerd[1659]: time="2026-04-20T20:05:10.787296824Z" level=error msg="ttrpc: received message on inactive stream" stream=85 Apr 20 20:05:14.498968 kubelet[3163]: I0420 20:05:14.479317 3163 status_manager.go:895] "Failed to get status for pod" 
podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:15.146272 kubelet[3163]: E0420 20:05:15.139413 3163 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Apr 20 20:05:19.227953 kubelet[3163]: E0420 20:05:19.227640 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs podName:dfb0b7d2-b28d-4433-9fba-0074dfdf81ee nodeName:}" failed. No retries permitted until 2026-04-20 20:07:20.509971157 +0000 UTC m=+3538.604220953 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dfb0b7d2-b28d-4433-9fba-0074dfdf81ee-calico-apiserver-certs") pod "calico-apiserver-84684997fc-zpm5v" (UID: "dfb0b7d2-b28d-4433-9fba-0074dfdf81ee") : failed to sync secret cache: timed out waiting for the condition Apr 20 20:05:19.874208 containerd[1659]: time="2026-04-20T20:05:19.796492124Z" level=error msg="Failed to handle backOff event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607} for 54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 20 20:05:21.951070 containerd[1659]: time="2026-04-20T20:05:21.863223468Z" level=info msg="Kill container \"ef8528e97d1e32a3f9ec36ed719195755e4cddd6b686223cdae0d82ad7e5a729\"" Apr 20 20:05:22.591790 kubelet[3163]: I0420 20:05:21.993894 3163 status_manager.go:895] "Failed to get status for pod" 
podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:23.365823 containerd[1659]: time="2026-04-20T20:05:23.359506826Z" level=error msg="ttrpc: received message on inactive stream" stream=91 Apr 20 20:05:23.820433 containerd[1659]: time="2026-04-20T20:05:23.491498404Z" level=error msg="ttrpc: received message on inactive stream" stream=89 Apr 20 20:05:24.269165 kubelet[3163]: E0420 20:05:22.318482 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:05:27.363136 kubelet[3163]: E0420 20:05:26.650334 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:05:29.258182 containerd[1659]: 
time="2026-04-20T20:05:29.249240416Z" level=info msg="Kill container \"d233337fed035d7d792516e10a336ab873891f1034d423fb98bec6f4fcd77fdf\"" Apr 20 20:05:30.889509 kubelet[3163]: I0420 20:05:28.631441 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:31.432470 containerd[1659]: time="2026-04-20T20:05:31.229619337Z" level=info msg="Kill container \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\"" Apr 20 20:05:31.851606 kubelet[3163]: E0420 20:05:30.771152 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 20:05:32.788980 sshd[7461]: Connection closed by 10.0.0.1 port 34866 Apr 20 20:05:32.892243 sshd-session[7426]: pam_unix(sshd:session): session closed for user core Apr 20 20:05:33.015000 audit[7426]: AUDIT1106 pid=7426 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:33.030099 kernel: audit: type=1106 audit(1776715533.015:1365): pid=7426 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Apr 20 20:05:33.016000 audit[7426]: AUDIT1104 pid=7426 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:33.197275 kernel: audit: type=1104 audit(1776715533.016:1366): pid=7426 uid=0 auid=500 ses=72 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:33.621183 systemd[1]: sshd@70-8224-10.0.0.14:22-10.0.0.1:34866.service: Deactivated successfully. Apr 20 20:05:33.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-8224-10.0.0.14:22-10.0.0.1:34866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:05:33.878945 kernel: audit: type=1131 audit(1776715533.752:1367): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@70-8224-10.0.0.14:22-10.0.0.1:34866 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:05:33.771348 systemd[1]: sshd@70-8224-10.0.0.14:22-10.0.0.1:34866.service: Consumed 4.279s CPU time, 4.1M memory peak. Apr 20 20:05:34.137342 systemd[1]: session-72.scope: Deactivated successfully. Apr 20 20:05:34.247511 systemd[1]: session-72.scope: Consumed 23.105s CPU time, 16.1M memory peak. Apr 20 20:05:34.735158 systemd-logind[1627]: Session 72 logged out. Waiting for processes to exit. 
Apr 20 20:05:35.068343 kubelet[3163]: E0420 20:05:34.942460 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:05:35.230038 systemd-logind[1627]: Removed session 72. Apr 20 20:05:35.462084 kubelet[3163]: E0420 20:05:35.461789 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:05:35.525594 kubelet[3163]: E0420 20:05:35.494297 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" 
reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:05:35.622950 kubelet[3163]: I0420 20:05:35.538925 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:35.622950 kubelet[3163]: E0420 20:05:35.542101 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 20:05:35.622950 kubelet[3163]: E0420 20:05:35.542196 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:05:23Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:05:23Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:05:23Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:05:23Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:35.622950 kubelet[3163]: E0420 20:05:35.620982 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:05:35.690124 kubelet[3163]: E0420 20:05:35.623165 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 20:05:35.784659 kubelet[3163]: I0420 20:05:35.718440 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:35.960430 kubelet[3163]: E0420 20:05:35.953136 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:35.960430 kubelet[3163]: E0420 20:05:35.953734 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="43.42s" Apr 20 20:05:36.484486 kubelet[3163]: E0420 20:05:36.480191 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:36.646292 kubelet[3163]: I0420 20:05:36.491181 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get 
\"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:37.199169 kubelet[3163]: E0420 20:05:37.192728 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:37.625431 kubelet[3163]: I0420 20:05:37.620778 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:37.686482 kubelet[3163]: E0420 20:05:37.684201 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:37.860348 kubelet[3163]: E0420 20:05:37.857756 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:05:37.945721 kubelet[3163]: I0420 20:05:37.936279 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:38.152641 kubelet[3163]: E0420 20:05:38.133315 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" 
reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 20:05:38.360227 kubelet[3163]: I0420 20:05:38.303847 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:38.802973 containerd[1659]: time="2026-04-20T20:05:38.802317710Z" level=info msg="received container exit event container_id:\"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" id:\"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" pid:6959 exit_status:137 exited_at:{seconds:1776715538 nanos:799284062}" Apr 20 20:05:39.142430 systemd[1]: cri-containerd-6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616.scope: Deactivated successfully. Apr 20 20:05:39.195000 audit: BPF prog-id=229 op=UNLOAD Apr 20 20:05:39.260038 kernel: audit: type=1334 audit(1776715539.195:1368): prog-id=229 op=UNLOAD Apr 20 20:05:39.248414 systemd[1]: cri-containerd-6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616.scope: Consumed 1min 36.844s CPU time, 48.6M memory peak, 660K read from disk, 4K written to disk. 
Apr 20 20:05:39.287105 kubelet[3163]: E0420 20:05:39.196499 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.241s" Apr 20 20:05:39.394313 kubelet[3163]: E0420 20:05:39.393900 3163 kuberuntime_container.go:741] "PreStop hook failed" err="command '/bin/calico-node -shutdown' exited with 137: " pod="calico-system/calico-node-g9fs5" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" containerName="calico-node" containerID="containerd://6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" Apr 20 20:05:39.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-8225-10.0.0.14:22-10.0.0.1:55950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:05:39.395430 systemd[1]: Started sshd@71-8225-10.0.0.14:22-10.0.0.1:55950.service - OpenSSH per-connection server daemon (10.0.0.1:55950). Apr 20 20:05:39.441734 kernel: audit: type=1130 audit(1776715539.394:1369): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-8225-10.0.0.14:22-10.0.0.1:55950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:05:39.984758 containerd[1659]: time="2026-04-20T20:05:39.984695858Z" level=error msg="ExecSync for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 20 20:05:39.986184 kubelet[3163]: E0420 20:05:39.986065 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 20:05:41.370782 containerd[1659]: time="2026-04-20T20:05:41.365221927Z" level=info msg="TaskExit event container_id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" id:\"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" pid:7004 exit_status:1 exited_at:{seconds:1776715389 nanos:506884042}" Apr 20 20:05:42.651434 kubelet[3163]: E0420 20:05:42.651328 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:05:43.038993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616-rootfs.mount: Deactivated successfully. 
Apr 20 20:05:43.221000 audit[7539]: AUDIT1101 pid=7539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:43.282916 kernel: audit: type=1101 audit(1776715543.221:1370): pid=7539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:43.371981 sshd[7539]: Accepted publickey for core from 10.0.0.1 port 55950 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 20:05:43.372000 audit[7539]: AUDIT1103 pid=7539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:43.387000 audit[7539]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdad818970 a2=3 a3=0 items=0 ppid=1 pid=7539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=73 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:43.387000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:05:43.427372 containerd[1659]: time="2026-04-20T20:05:43.222737205Z" level=error msg="ExecSync for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"bae73107d9375fb30c472d491df33947de5d1a47632c80e452c2d46a45191a07\": cannot exec in a deleted state" Apr 20 20:05:43.439652 kubelet[3163]: E0420 20:05:43.286474 3163 log.go:32] "ExecSync cmd 
from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"bae73107d9375fb30c472d491df33947de5d1a47632c80e452c2d46a45191a07\": cannot exec in a deleted state" containerID="6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 20:05:43.409665 sshd-session[7539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:05:43.494318 kernel: audit: type=1103 audit(1776715543.372:1371): pid=7539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:43.494934 containerd[1659]: time="2026-04-20T20:05:43.472178154Z" level=error msg="ExecSync for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616 not found" Apr 20 20:05:43.495109 kernel: audit: type=1006 audit(1776715543.387:1372): pid=7539 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=73 res=1 Apr 20 20:05:43.495187 kernel: audit: type=1300 audit(1776715543.387:1372): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdad818970 a2=3 a3=0 items=0 ppid=1 pid=7539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=73 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:43.495365 kernel: audit: type=1327 audit(1776715543.387:1372): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:05:43.562084 containerd[1659]: time="2026-04-20T20:05:43.561740150Z" level=info msg="StopContainer for 
\"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" returns successfully" Apr 20 20:05:43.670251 kubelet[3163]: E0420 20:05:43.593324 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616 not found" containerID="6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 20:05:43.760002 containerd[1659]: time="2026-04-20T20:05:43.759096803Z" level=error msg="ExecSync for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Apr 20 20:05:43.827296 kubelet[3163]: E0420 20:05:43.825918 3163 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 20 20:05:43.862707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2-rootfs.mount: Deactivated successfully. Apr 20 20:05:43.893273 containerd[1659]: time="2026-04-20T20:05:43.884499364Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for container name:\"calico-node\" attempt:3" Apr 20 20:05:44.178165 containerd[1659]: time="2026-04-20T20:05:44.177752884Z" level=info msg="Container 4fd2475b78ba528d6e7b1b1dc9e62d669bc5193579902ecbb3f7c1a1b24c3ca9: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:05:44.303079 systemd-logind[1627]: New session '73' of user 'core' with class 'user' and type 'tty'. 
Apr 20 20:05:44.637149 containerd[1659]: time="2026-04-20T20:05:44.629257415Z" level=info msg="CreateContainer within sandbox \"1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f\" for name:\"calico-node\" attempt:3 returns container id \"4fd2475b78ba528d6e7b1b1dc9e62d669bc5193579902ecbb3f7c1a1b24c3ca9\"" Apr 20 20:05:44.694073 containerd[1659]: time="2026-04-20T20:05:44.688680081Z" level=info msg="StartContainer for \"4fd2475b78ba528d6e7b1b1dc9e62d669bc5193579902ecbb3f7c1a1b24c3ca9\"" Apr 20 20:05:44.724025 containerd[1659]: time="2026-04-20T20:05:44.723964319Z" level=info msg="connecting to shim 4fd2475b78ba528d6e7b1b1dc9e62d669bc5193579902ecbb3f7c1a1b24c3ca9" address="unix:///run/containerd/s/d6fd6f578359a16fb6047ac6b8915843558ecdd02f7ae288b74c76a061bb8a9a" protocol=ttrpc version=3 Apr 20 20:05:44.763910 systemd[1]: Started session-73.scope - Session 73 of User core. Apr 20 20:05:45.001585 kubelet[3163]: I0420 20:05:44.998940 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:45.124695 kubelet[3163]: I0420 20:05:45.117331 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:45.146733 kubelet[3163]: I0420 20:05:45.145241 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 
20:05:45.181000 audit[7539]: AUDIT1105 pid=7539 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:45.196183 kernel: audit: type=1105 audit(1776715545.181:1373): pid=7539 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:45.288008 kubelet[3163]: I0420 20:05:45.262882 3163 scope.go:117] "RemoveContainer" containerID="38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a" Apr 20 20:05:45.328000 audit[7579]: AUDIT1103 pid=7579 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:45.337768 kernel: audit: type=1103 audit(1776715545.328:1374): pid=7579 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:45.380838 kubelet[3163]: I0420 20:05:45.379108 3163 scope.go:117] "RemoveContainer" containerID="31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2" Apr 20 20:05:45.396646 kubelet[3163]: I0420 20:05:45.381099 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get 
\"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:45.396646 kubelet[3163]: I0420 20:05:45.389871 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:45.477638 kubelet[3163]: I0420 20:05:45.400911 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:45.492668 containerd[1659]: time="2026-04-20T20:05:45.492501106Z" level=info msg="RemoveContainer for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\"" Apr 20 20:05:45.555927 containerd[1659]: time="2026-04-20T20:05:45.550786390Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for container name:\"calico-apiserver\" attempt:4" Apr 20 20:05:45.565794 kubelet[3163]: E0420 20:05:45.550744 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get 
\"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:05:45.900878 systemd[1]: Started cri-containerd-4fd2475b78ba528d6e7b1b1dc9e62d669bc5193579902ecbb3f7c1a1b24c3ca9.scope - libcontainer container 4fd2475b78ba528d6e7b1b1dc9e62d669bc5193579902ecbb3f7c1a1b24c3ca9. Apr 20 20:05:47.140444 containerd[1659]: time="2026-04-20T20:05:47.049990266Z" level=info msg="Container ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:05:47.265718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977606657.mount: Deactivated successfully. Apr 20 20:05:47.540462 containerd[1659]: time="2026-04-20T20:05:47.529644358Z" level=error msg="get state for 1bba4a8c43a21f7135445620d6423b332e6946df1fe8c521ac8720d95194888f" error="context deadline exceeded" Apr 20 20:05:47.540462 containerd[1659]: time="2026-04-20T20:05:47.529981639Z" level=warning msg="unknown status" status=0 Apr 20 20:05:48.264803 containerd[1659]: time="2026-04-20T20:05:48.264272641Z" level=info msg="RemoveContainer for \"38809e5cd6af13351ccdf029282f28389cf4d54a3e7b12c6f1dbc5b71d29ce3a\" returns successfully" Apr 20 20:05:48.449453 kubelet[3163]: E0420 20:05:48.449142 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.504s" Apr 20 20:05:48.596697 containerd[1659]: time="2026-04-20T20:05:48.524938365Z" level=error msg="get state for 4fd2475b78ba528d6e7b1b1dc9e62d669bc5193579902ecbb3f7c1a1b24c3ca9" error="context deadline exceeded" Apr 20 20:05:48.596697 containerd[1659]: time="2026-04-20T20:05:48.527486781Z" level=warning 
msg="unknown status" status=0 Apr 20 20:05:48.618004 containerd[1659]: time="2026-04-20T20:05:48.617811577Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for name:\"calico-apiserver\" attempt:4 returns container id \"ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d\"" Apr 20 20:05:48.738839 kubelet[3163]: E0420 20:05:48.738788 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:05:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:05:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:05:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:05:48Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:48.748150 containerd[1659]: time="2026-04-20T20:05:48.738809248Z" level=info msg="StartContainer for \"ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d\"" Apr 20 20:05:48.753000 audit: BPF prog-id=240 op=LOAD Apr 20 20:05:48.753000 audit[7572]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001ee490 a2=98 a3=0 items=0 ppid=4032 pid=7572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:48.753000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466643234373562373862613532386436653762316231646339653632 Apr 20 20:05:48.754000 audit: BPF prog-id=241 op=LOAD Apr 20 20:05:48.754000 audit[7572]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ee220 a2=98 a3=0 items=0 ppid=4032 pid=7572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:48.754000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466643234373562373862613532386436653762316231646339653632 Apr 20 20:05:48.852000 audit: BPF prog-id=241 op=UNLOAD Apr 20 20:05:48.852000 audit[7572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=7572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:48.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466643234373562373862613532386436653762316231646339653632 Apr 20 20:05:48.974000 audit: BPF prog-id=240 op=UNLOAD Apr 20 20:05:48.974000 audit[7572]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=13 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=7572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 
20:05:48.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466643234373562373862613532386436653762316231646339653632 Apr 20 20:05:48.976000 audit: BPF prog-id=242 op=LOAD Apr 20 20:05:48.981373 kernel: audit: type=1334 audit(1776715548.753:1375): prog-id=240 op=LOAD Apr 20 20:05:48.981458 containerd[1659]: time="2026-04-20T20:05:48.756218436Z" level=info msg="connecting to shim ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d" address="unix:///run/containerd/s/9f25d20f4617cde34f7397032d9ecbc0b43cd780bc15ce3e8713428f4b2ceb63" protocol=ttrpc version=3 Apr 20 20:05:48.976000 audit[7572]: SYSCALL arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001ee6f0 a2=98 a3=0 items=0 ppid=4032 pid=7572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:48.976000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466643234373562373862613532386436653762316231646339653632 Apr 20 20:05:48.981825 kernel: audit: type=1300 audit(1776715548.753:1375): arch=c000003e syscall=321 success=yes exit=19 a0=5 a1=c0001ee490 a2=98 a3=0 items=0 ppid=4032 pid=7572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:48.981846 kernel: audit: type=1327 audit(1776715548.753:1375): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466643234373562373862613532386436653762316231646339653632 Apr 20 20:05:48.981861 kernel: audit: type=1334 audit(1776715548.754:1376): prog-id=241 op=LOAD Apr 20 20:05:48.981874 kernel: audit: type=1300 audit(1776715548.754:1376): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001ee220 a2=98 a3=0 items=0 ppid=4032 pid=7572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:48.981888 kernel: audit: type=1327 audit(1776715548.754:1376): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466643234373562373862613532386436653762316231646339653632 Apr 20 20:05:48.981906 kernel: audit: type=1334 audit(1776715548.852:1377): prog-id=241 op=UNLOAD Apr 20 20:05:48.981923 kernel: audit: type=1300 audit(1776715548.852:1377): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4032 pid=7572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:49.030430 kubelet[3163]: E0420 20:05:49.030181 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:49.099915 kubelet[3163]: E0420 20:05:49.097810 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get 
\"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:49.284142 kubelet[3163]: E0420 20:05:49.280843 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:49.797931 kubelet[3163]: E0420 20:05:49.792467 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:49.797931 kubelet[3163]: E0420 20:05:49.797952 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:05:49.940337 kubelet[3163]: E0420 20:05:49.938519 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:05:50.053238 kubelet[3163]: E0420 20:05:50.046050 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.066s" Apr 20 20:05:50.164116 kubelet[3163]: I0420 20:05:50.128770 3163 scope.go:117] "RemoveContainer" containerID="6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" Apr 20 20:05:50.258507 containerd[1659]: time="2026-04-20T20:05:50.256796793Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 20:05:50.450163 containerd[1659]: time="2026-04-20T20:05:50.259957930Z" level=error msg="ttrpc: received message on inactive stream" stream=89 Apr 20 20:05:51.598928 containerd[1659]: time="2026-04-20T20:05:51.598627914Z" level=info msg="RemoveContainer for 
\"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\"" Apr 20 20:05:52.195240 kubelet[3163]: E0420 20:05:52.190862 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.304s" Apr 20 20:05:52.293043 containerd[1659]: time="2026-04-20T20:05:52.291434718Z" level=info msg="TaskExit event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607}" Apr 20 20:05:52.453106 sshd[7579]: Connection closed by 10.0.0.1 port 55950 Apr 20 20:05:52.456207 sshd-session[7539]: pam_unix(sshd:session): session closed for user core Apr 20 20:05:52.529000 audit[7539]: AUDIT1106 pid=7539 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:52.551274 kernel: kauditd_printk_skb: 7 callbacks suppressed Apr 20 20:05:52.560000 audit[7539]: AUDIT1104 pid=7539 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:52.657433 kernel: audit: type=1106 audit(1776715552.529:1380): pid=7539 uid=0 auid=500 ses=73 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:52.669212 kernel: audit: type=1104 audit(1776715552.560:1381): pid=7539 uid=0 auid=500 ses=73 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:05:52.841709 systemd[1]: sshd@71-8225-10.0.0.14:22-10.0.0.1:55950.service: Deactivated successfully. Apr 20 20:05:52.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-8225-10.0.0.14:22-10.0.0.1:55950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:05:52.878805 systemd[1]: sshd@71-8225-10.0.0.14:22-10.0.0.1:55950.service: Consumed 2.158s CPU time, 4.1M memory peak. Apr 20 20:05:52.889219 kernel: audit: type=1131 audit(1776715552.876:1382): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@71-8225-10.0.0.14:22-10.0.0.1:55950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:05:53.091454 systemd[1]: session-73.scope: Deactivated successfully. Apr 20 20:05:53.175957 systemd[1]: session-73.scope: Consumed 5.431s CPU time, 18M memory peak. Apr 20 20:05:53.399250 containerd[1659]: time="2026-04-20T20:05:53.385369873Z" level=info msg="RemoveContainer for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" returns successfully" Apr 20 20:05:53.420440 systemd-logind[1627]: Session 73 logged out. Waiting for processes to exit. 
Apr 20 20:05:53.632454 containerd[1659]: time="2026-04-20T20:05:53.630122982Z" level=error msg="ContainerStatus for \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\": not found" Apr 20 20:05:53.725406 kubelet[3163]: E0420 20:05:53.646775 3163 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616\": not found" containerID="6c1d610fa13cf975d72818ef3ba28644ae87099c5273d909f3b984c9ff12b616" Apr 20 20:05:53.725406 kubelet[3163]: I0420 20:05:53.647319 3163 scope.go:117] "RemoveContainer" containerID="31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2" Apr 20 20:05:53.739782 systemd-logind[1627]: Removed session 73. Apr 20 20:05:54.050036 kubelet[3163]: E0420 20:05:53.963172 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/serviceaccounts/csi-node-driver/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"csi-node-driver\"/\"calico-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"csi-node-driver-5h6vg\", UID:\"9f02930c-961c-4c4b-8334-b61cbd5c3d20\"}" Apr 20 20:05:54.086727 kubelet[3163]: E0420 20:05:54.073601 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.104s" Apr 20 20:05:54.522339 containerd[1659]: time="2026-04-20T20:05:54.522299662Z" level=info msg="RemoveContainer for \"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\"" Apr 20 20:05:54.761021 systemd[1]: Started cri-containerd-ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d.scope - libcontainer container 
ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d. Apr 20 20:05:55.692443 containerd[1659]: time="2026-04-20T20:05:55.691615534Z" level=info msg="StartContainer for \"4fd2475b78ba528d6e7b1b1dc9e62d669bc5193579902ecbb3f7c1a1b24c3ca9\" returns successfully" Apr 20 20:05:56.025180 kubelet[3163]: I0420 20:05:55.763006 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:56.681734 containerd[1659]: time="2026-04-20T20:05:56.675943539Z" level=error msg="get state for de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571" error="context deadline exceeded" Apr 20 20:05:56.710946 containerd[1659]: time="2026-04-20T20:05:56.676512589Z" level=warning msg="unknown status" status=0 Apr 20 20:05:57.691667 kubelet[3163]: E0420 20:05:57.680189 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:05:58.043048 containerd[1659]: time="2026-04-20T20:05:58.042140947Z" level=info msg="RemoveContainer for \"31fa0b5b42e52fa654e4e40a75db16fac90293fdf9e236b87985c2e9f4e819b2\" returns successfully" Apr 20 20:05:58.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-12-10.0.0.14:22-10.0.0.1:55424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:05:58.578953 systemd[1]: Started sshd@72-12-10.0.0.14:22-10.0.0.1:55424.service - OpenSSH per-connection server daemon (10.0.0.1:55424). Apr 20 20:05:58.716219 kernel: audit: type=1130 audit(1776715558.587:1383): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-12-10.0.0.14:22-10.0.0.1:55424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:05:59.392000 audit: BPF prog-id=243 op=LOAD Apr 20 20:05:59.462345 kernel: audit: type=1334 audit(1776715559.392:1384): prog-id=243 op=LOAD Apr 20 20:05:59.717000 audit: BPF prog-id=244 op=LOAD Apr 20 20:05:59.717000 audit[7601]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206240 a2=98 a3=0 items=0 ppid=5396 pid=7601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:59.717000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393462346261323931663863646635633764303531356331316430 Apr 20 20:05:59.748000 audit: BPF prog-id=244 op=UNLOAD Apr 20 20:05:59.748000 audit[7601]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 
items=0 ppid=5396 pid=7601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:59.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393462346261323931663863646635633764303531356331316430 Apr 20 20:05:59.779000 audit: BPF prog-id=245 op=LOAD Apr 20 20:05:59.779000 audit[7601]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206490 a2=98 a3=0 items=0 ppid=5396 pid=7601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:59.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393462346261323931663863646635633764303531356331316430 Apr 20 20:05:59.861000 audit: BPF prog-id=246 op=LOAD Apr 20 20:05:59.864202 kernel: audit: type=1334 audit(1776715559.717:1385): prog-id=244 op=LOAD Apr 20 20:05:59.864236 kernel: audit: type=1300 audit(1776715559.717:1385): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206240 a2=98 a3=0 items=0 ppid=5396 pid=7601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:59.864315 kernel: audit: type=1327 audit(1776715559.717:1385): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393462346261323931663863646635633764303531356331316430 Apr 20 20:05:59.864430 kernel: audit: type=1334 audit(1776715559.748:1386): prog-id=244 op=UNLOAD Apr 20 20:05:59.864456 kernel: audit: type=1300 audit(1776715559.748:1386): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=7601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:59.864470 kernel: audit: type=1327 audit(1776715559.748:1386): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393462346261323931663863646635633764303531356331316430 Apr 20 20:05:59.864499 kernel: audit: type=1334 audit(1776715559.779:1387): prog-id=245 op=LOAD Apr 20 20:05:59.872322 kernel: audit: type=1300 audit(1776715559.779:1387): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206490 a2=98 a3=0 items=0 ppid=5396 pid=7601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:59.861000 audit[7601]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000206220 a2=98 a3=0 items=0 ppid=5396 pid=7601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:59.861000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393462346261323931663863646635633764303531356331316430 Apr 20 20:05:59.982365 kubelet[3163]: I0420 20:05:59.928605 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:05:59.988000 audit: BPF prog-id=246 op=UNLOAD Apr 20 20:05:59.988000 audit[7601]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=7601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:05:59.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393462346261323931663863646635633764303531356331316430 Apr 20 20:06:00.142000 audit: BPF prog-id=245 op=UNLOAD Apr 20 20:06:00.142000 audit[7601]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5396 pid=7601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:06:00.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393462346261323931663863646635633764303531356331316430 Apr 20 20:06:00.284117 
containerd[1659]: time="2026-04-20T20:06:00.278070455Z" level=error msg="get state for ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d" error="context deadline exceeded" Apr 20 20:06:00.145000 audit: BPF prog-id=247 op=LOAD Apr 20 20:06:00.145000 audit[7601]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002066f0 a2=98 a3=0 items=0 ppid=5396 pid=7601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:06:00.315901 containerd[1659]: time="2026-04-20T20:06:00.284320007Z" level=warning msg="unknown status" status=0 Apr 20 20:06:00.316384 kubelet[3163]: I0420 20:06:00.316316 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:00.319242 kubelet[3163]: E0420 20:06:00.316734 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:06:00.145000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163393462346261323931663863646635633764303531356331316430 Apr 20 20:06:01.386210 kubelet[3163]: E0420 20:06:01.381667 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.049s" Apr 20 20:06:02.537647 containerd[1659]: time="2026-04-20T20:06:02.341232276Z" level=error msg="failed to delete task" 
error="context deadline exceeded" id=54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c Apr 20 20:06:02.590417 containerd[1659]: time="2026-04-20T20:06:02.578323986Z" level=error msg="Failed to handle backOff event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607} for 54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:06:03.178373 kubelet[3163]: E0420 20:06:03.173390 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:01Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:03.287191 containerd[1659]: time="2026-04-20T20:06:03.204251695Z" level=error msg="ttrpc: received message on inactive stream" stream=107 Apr 20 20:06:03.384067 containerd[1659]: time="2026-04-20T20:06:03.277241157Z" level=error msg="get state for ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d" error="context deadline exceeded" Apr 20 20:06:03.407299 containerd[1659]: 
time="2026-04-20T20:06:03.388907369Z" level=warning msg="unknown status" status=0 Apr 20 20:06:04.081264 kubelet[3163]: E0420 20:06:04.062961 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:04.223205 kubelet[3163]: E0420 20:06:04.221660 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:04.223205 kubelet[3163]: E0420 20:06:04.222350 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:04.223205 kubelet[3163]: E0420 20:06:04.222605 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:04.223205 kubelet[3163]: E0420 20:06:04.222617 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:06:04.386959 kubelet[3163]: E0420 20:06:04.386462 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.85s" Apr 20 20:06:04.652574 kubelet[3163]: I0420 20:06:04.651392 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:04.731323 
kubelet[3163]: I0420 20:06:04.692439 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:04.807332 containerd[1659]: time="2026-04-20T20:06:04.763693453Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 20 20:06:04.882735 containerd[1659]: time="2026-04-20T20:06:04.798113087Z" level=error msg="ttrpc: received message on inactive stream" stream=73 Apr 20 20:06:04.882735 containerd[1659]: time="2026-04-20T20:06:04.879365334Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 20 20:06:04.895875 kubelet[3163]: I0420 20:06:04.797272 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:04.982843 kubelet[3163]: E0420 20:06:04.915260 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/serviceaccounts/calico-node/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"calico-node\"/\"calico-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"calico-node-g9fs5\", UID:\"071d23f6-a94b-4165-9229-2d0570b516d8\"}" Apr 20 20:06:05.064443 kubelet[3163]: I0420 20:06:05.062837 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:05.212635 
kubelet[3163]: I0420 20:06:05.210816 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:05.400295 kubelet[3163]: I0420 20:06:05.391301 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:05.654000 audit[7654]: AUDIT1101 pid=7654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:05.691359 kernel: kauditd_printk_skb: 13 callbacks suppressed Apr 20 20:06:05.744229 sshd[7654]: Accepted publickey for core from 10.0.0.1 port 55424 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 20:06:05.751742 kernel: audit: type=1101 audit(1776715565.654:1392): pid=7654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:05.831000 audit[7654]: AUDIT1103 pid=7654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:05.877000 audit[7654]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeed25e700 a2=3 
a3=0 items=0 ppid=1 pid=7654 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=74 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:06:05.877000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:06:05.922973 kernel: audit: type=1103 audit(1776715565.831:1393): pid=7654 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:05.923206 kernel: audit: type=1006 audit(1776715565.877:1394): pid=7654 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=74 res=1 Apr 20 20:06:05.923168 sshd-session[7654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:06:05.924234 kernel: audit: type=1300 audit(1776715565.877:1394): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeed25e700 a2=3 a3=0 items=0 ppid=1 pid=7654 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=74 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:06:05.924621 kernel: audit: type=1327 audit(1776715565.877:1394): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:06:06.600033 systemd-logind[1627]: New session '74' of user 'core' with class 'user' and type 'tty'. Apr 20 20:06:06.732885 systemd[1]: Started session-74.scope - Session 74 of User core. 
Apr 20 20:06:06.957880 kubelet[3163]: E0420 20:06:06.867571 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/serviceaccounts/calico-node/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"calico-node\"/\"calico-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"calico-node-g9fs5\", UID:\"071d23f6-a94b-4165-9229-2d0570b516d8\"}" Apr 20 20:06:07.488000 audit[7654]: AUDIT1105 pid=7654 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:07.584495 kernel: audit: type=1105 audit(1776715567.488:1395): pid=7654 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:07.839000 audit[7672]: AUDIT1103 pid=7672 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:07.943940 kernel: audit: type=1103 audit(1776715567.839:1396): pid=7672 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:08.611814 kubelet[3163]: E0420 20:06:08.606441 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:06:08.611814 kubelet[3163]: E0420 20:06:08.607494 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:06:08.679182 kubelet[3163]: E0420 20:06:08.677912 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:06:09.009441 kubelet[3163]: E0420 20:06:09.008782 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.994s" Apr 20 20:06:09.140641 containerd[1659]: time="2026-04-20T20:06:09.138389415Z" level=info 
msg="StartContainer for \"ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d\" returns successfully" Apr 20 20:06:11.723719 kubelet[3163]: E0420 20:06:11.719766 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:12.184111 kubelet[3163]: I0420 20:06:12.183576 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:12.184111 kubelet[3163]: I0420 20:06:12.183848 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:12.184111 kubelet[3163]: I0420 20:06:12.184040 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:12.901511 kubelet[3163]: E0420 20:06:12.896833 3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:06:15.156138 kubelet[3163]: E0420 20:06:15.154797 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.283s" Apr 20 20:06:15.204931 kubelet[3163]: E0420 20:06:15.204472 3163 secret.go:189] Couldn't get secret calico-system/node-certs: failed to 
sync secret cache: timed out waiting for the condition Apr 20 20:06:15.774207 kubelet[3163]: E0420 20:06:15.772946 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs podName:071d23f6-a94b-4165-9229-2d0570b516d8 nodeName:}" failed. No retries permitted until 2026-04-20 20:08:17.353357834 +0000 UTC m=+3595.447607542 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/071d23f6-a94b-4165-9229-2d0570b516d8-node-certs") pod "calico-node-g9fs5" (UID: "071d23f6-a94b-4165-9229-2d0570b516d8") : failed to sync secret cache: timed out waiting for the condition Apr 20 20:06:16.010335 kubelet[3163]: E0420 20:06:16.009061 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:06:16.178689 kubelet[3163]: I0420 20:06:15.734274 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:17.991795 sshd[7672]: Connection closed by 10.0.0.1 port 55424 Apr 20 20:06:18.086172 sshd-session[7654]: pam_unix(sshd:session): session closed for user core Apr 20 20:06:18.244943 kubelet[3163]: I0420 20:06:17.876880 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:18.244000 audit[7654]: AUDIT1106 pid=7654 uid=0 auid=500 ses=74 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:18.265380 kernel: audit: type=1106 audit(1776715578.244:1397): pid=7654 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:18.264000 audit[7654]: AUDIT1104 pid=7654 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:18.320066 kernel: audit: type=1104 audit(1776715578.264:1398): pid=7654 uid=0 auid=500 ses=74 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:18.349730 kubelet[3163]: E0420 20:06:18.336170 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.805s" Apr 20 20:06:18.487129 kubelet[3163]: I0420 20:06:18.465361 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:18.654501 systemd[1]: sshd@72-12-10.0.0.14:22-10.0.0.1:55424.service: Deactivated successfully. 
Apr 20 20:06:18.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-12-10.0.0.14:22-10.0.0.1:55424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:06:18.718402 systemd[1]: sshd@72-12-10.0.0.14:22-10.0.0.1:55424.service: Consumed 2.792s CPU time, 4.1M memory peak. Apr 20 20:06:19.030182 kernel: audit: type=1131 audit(1776715578.716:1399): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@72-12-10.0.0.14:22-10.0.0.1:55424 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:06:19.082258 kubelet[3163]: E0420 20:06:18.840229 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:15Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:19.082258 kubelet[3163]: E0420 20:06:18.841985 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 
3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:06:19.082258 kubelet[3163]: E0420 20:06:18.842165 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:19.082258 kubelet[3163]: E0420 20:06:18.842340 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:18.841991 systemd[1]: session-74.scope: Deactivated successfully. Apr 20 20:06:18.873923 systemd[1]: session-74.scope: Consumed 6.975s CPU time, 15.6M memory peak. Apr 20 20:06:19.243100 systemd-logind[1627]: Session 74 logged out. Waiting for processes to exit. Apr 20 20:06:19.393142 systemd-logind[1627]: Removed session 74. 
Apr 20 20:06:19.534409 kubelet[3163]: E0420 20:06:19.026237 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 20:06:19.865780 kubelet[3163]: E0420 20:06:19.864668 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:20.066247 kubelet[3163]: E0420 20:06:20.045236 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 20:06:20.199752 kubelet[3163]: E0420 20:06:20.199380 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:20.199752 kubelet[3163]: E0420 20:06:20.199597 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:06:20.699972 kubelet[3163]: E0420 20:06:20.680381 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.338s" Apr 20 20:06:21.471853 kubelet[3163]: E0420 20:06:21.455361 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial 
tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:06:22.456153 systemd[1]: cri-containerd-ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d.scope: Deactivated successfully. Apr 20 20:06:22.563000 audit: BPF prog-id=247 op=UNLOAD Apr 20 20:06:22.578000 audit: BPF prog-id=243 op=UNLOAD Apr 20 20:06:22.701605 kernel: audit: type=1334 audit(1776715582.563:1400): prog-id=247 op=UNLOAD Apr 20 20:06:22.586458 systemd[1]: cri-containerd-ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d.scope: Consumed 11.202s CPU time, 19.2M memory peak. Apr 20 20:06:22.821260 kernel: audit: type=1334 audit(1776715582.578:1401): prog-id=243 op=UNLOAD Apr 20 20:06:22.872147 containerd[1659]: time="2026-04-20T20:06:22.871073467Z" level=info msg="received container exit event container_id:\"ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d\" id:\"ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d\" pid:7640 exit_status:1 exited_at:{seconds:1776715582 nanos:867134678}" Apr 20 20:06:23.490313 kubelet[3163]: E0420 20:06:23.450175 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:06:25.047186 systemd[1]: Started sshd@73-12301-10.0.0.14:22-10.0.0.1:33092.service - OpenSSH per-connection server daemon (10.0.0.1:33092). Apr 20 20:06:25.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-12301-10.0.0.14:22-10.0.0.1:33092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:06:25.333287 kernel: audit: type=1130 audit(1776715585.274:1402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-12301-10.0.0.14:22-10.0.0.1:33092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:06:25.682687 kubelet[3163]: E0420 20:06:25.482461 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.597s" Apr 20 20:06:26.804118 kubelet[3163]: I0420 20:06:26.787309 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:26.853014 kubelet[3163]: E0420 20:06:26.829186 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 20:06:26.997361 kubelet[3163]: I0420 20:06:26.946341 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:27.146242 kubelet[3163]: I0420 20:06:27.143408 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: 
connection refused" Apr 20 20:06:27.788631 kubelet[3163]: E0420 20:06:27.778141 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.099s" Apr 20 20:06:28.881101 kubelet[3163]: E0420 20:06:28.843388 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:06:29.482036 kubelet[3163]: E0420 20:06:29.481899 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Apr 20 20:06:29.561000 audit[7729]: AUDIT1101 pid=7729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:29.689418 kernel: audit: type=1101 audit(1776715589.561:1403): pid=7729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:29.708386 sshd[7729]: Accepted publickey for core from 10.0.0.1 port 33092 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 20:06:29.713000 audit[7729]: AUDIT1103 pid=7729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:29.717000 audit[7729]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4beae840 a2=3 a3=0 items=0 ppid=1 pid=7729 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=75 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:06:29.717000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:06:29.846596 kernel: audit: type=1103 audit(1776715589.713:1404): pid=7729 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:29.849119 kernel: audit: type=1006 audit(1776715589.717:1405): pid=7729 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=75 res=1 Apr 20 20:06:29.850196 kernel: audit: type=1300 audit(1776715589.717:1405): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc4beae840 a2=3 a3=0 items=0 ppid=1 pid=7729 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=75 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:06:29.855528 kernel: audit: type=1327 audit(1776715589.717:1405): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:06:29.914847 sshd-session[7729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:06:30.415056 kubelet[3163]: E0420 20:06:30.398785 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"kube-proxy\"/\"kube-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"kube-proxy-c6mkn\", UID:\"526e8f89-8d32-4504-b20c-956610c7bb82\"}" Apr 20 20:06:31.058139 kubelet[3163]: E0420 20:06:31.043091 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:06:31.567351 systemd-logind[1627]: New session '75' of user 'core' with class 'user' and type 'tty'. 
Apr 20 20:06:31.776408 kubelet[3163]: E0420 20:06:31.682917 3163 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:06:31.914089 kubelet[3163]: E0420 20:06:31.912116 3163 projected.go:194] Error preparing data for projected volume kube-api-access-6ncsk for pod kube-system/kube-proxy-c6mkn: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:06:31.987253 kubelet[3163]: E0420 20:06:31.984420 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk podName:526e8f89-8d32-4504-b20c-956610c7bb82 nodeName:}" failed. No retries permitted until 2026-04-20 20:08:33.964415951 +0000 UTC m=+3612.058665650 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6ncsk" (UniqueName: "kubernetes.io/projected/526e8f89-8d32-4504-b20c-956610c7bb82-kube-api-access-6ncsk") pod "kube-proxy-c6mkn" (UID: "526e8f89-8d32-4504-b20c-956610c7bb82") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:06:32.165251 kubelet[3163]: E0420 20:06:32.162198 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:30Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch 
\"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:32.312950 systemd[1]: Started session-75.scope - Session 75 of User core. Apr 20 20:06:32.346577 kubelet[3163]: E0420 20:06:32.345386 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:32.664216 kubelet[3163]: E0420 20:06:32.597378 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:32.726463 kubelet[3163]: E0420 20:06:32.723385 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.865s" Apr 20 20:06:32.782348 kubelet[3163]: E0420 20:06:32.749991 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:32.864809 kubelet[3163]: E0420 20:06:32.845351 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:32.864809 kubelet[3163]: E0420 20:06:32.847110 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:06:32.879000 audit[7729]: AUDIT1105 pid=7729 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:32.984274 containerd[1659]: time="2026-04-20T20:06:32.874334207Z" level=error msg="failed to delete task" error="context deadline exceeded" id=ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d Apr 20 20:06:33.201831 kernel: audit: type=1105 audit(1776715592.879:1406): pid=7729 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:33.338000 audit[7763]: AUDIT1103 pid=7763 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:33.464968 kernel: audit: type=1103 audit(1776715593.338:1407): pid=7763 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:33.636121 containerd[1659]: time="2026-04-20T20:06:33.333312590Z" level=error msg="failed to handle container TaskExit event container_id:\"ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d\" id:\"ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d\" pid:7640 exit_status:1 exited_at:{seconds:1776715582 nanos:867134678}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 20 20:06:34.058265 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d-rootfs.mount: Deactivated successfully. Apr 20 20:06:34.861167 containerd[1659]: time="2026-04-20T20:06:34.855960995Z" level=error msg="ttrpc: received message on inactive stream" stream=39 Apr 20 20:06:35.327063 containerd[1659]: time="2026-04-20T20:06:34.874372367Z" level=info msg="TaskExit event container_id:\"ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d\" id:\"ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d\" pid:7640 exit_status:1 exited_at:{seconds:1776715582 nanos:867134678}" Apr 20 20:06:36.266914 containerd[1659]: time="2026-04-20T20:06:36.154810340Z" level=error msg="failed to delete task" error="rpc error: code = NotFound desc = container not created: not found" id=ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d Apr 20 20:06:36.832273 containerd[1659]: time="2026-04-20T20:06:36.815152621Z" level=info msg="Ensure that container ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d in task-service has been cleanup successfully" Apr 20 20:06:37.265084 kubelet[3163]: E0420 20:06:37.074479 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.09s" Apr 20 20:06:37.490247 kubelet[3163]: I0420 20:06:37.474998 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:39.026104 kubelet[3163]: I0420 20:06:39.024282 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: 
connect: connection refused" Apr 20 20:06:39.043975 kubelet[3163]: E0420 20:06:39.043295 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:06:39.244116 kubelet[3163]: E0420 20:06:39.237404 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:06:39.358352 kubelet[3163]: I0420 20:06:39.252394 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:40.239973 kubelet[3163]: E0420 20:06:40.233730 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" 
expected="1s" actual="2.88s" Apr 20 20:06:41.506187 kubelet[3163]: E0420 20:06:41.505729 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:06:41.644410 kubelet[3163]: I0420 20:06:41.643268 3163 scope.go:117] "RemoveContainer" containerID="ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d" Apr 20 20:06:41.665122 kubelet[3163]: E0420 20:06:41.663630 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=calico-apiserver pod=calico-apiserver-84684997fc-zpm5v_calico-system(dfb0b7d2-b28d-4433-9fba-0074dfdf81ee)\"" pod="calico-system/calico-apiserver-84684997fc-zpm5v" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" Apr 20 20:06:41.672788 kubelet[3163]: I0420 20:06:41.669259 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:41.774155 kubelet[3163]: I0420 20:06:41.737378 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:41.956520 kubelet[3163]: I0420 20:06:41.954205 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" 
pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:42.033395 kubelet[3163]: E0420 20:06:42.030892 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/serviceaccounts/calico-apiserver/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"calico-apiserver\"/\"calico-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"calico-apiserver-84684997fc-zpm5v\", UID:\"dfb0b7d2-b28d-4433-9fba-0074dfdf81ee\"}" Apr 20 20:06:43.576159 kubelet[3163]: E0420 20:06:43.574204 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:43Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:43.654626 kubelet[3163]: E0420 20:06:43.654474 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:43.730652 kubelet[3163]: E0420 20:06:43.729935 
3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:43.795747 kubelet[3163]: E0420 20:06:43.791915 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:43.911406 kubelet[3163]: E0420 20:06:43.836770 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:43.911406 kubelet[3163]: E0420 20:06:43.845899 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:06:45.013235 kubelet[3163]: I0420 20:06:45.012993 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:45.071797 kubelet[3163]: E0420 20:06:45.011980 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/serviceaccounts/tigera-operator/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"tigera-operator\"/\"tigera-operator\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"tigera-operator-6bf85f8dd-hvgdj\", UID:\"22f1ff03-de8a-48db-b03e-54fdbe0d3d5f\"}" Apr 20 20:06:45.161589 kubelet[3163]: I0420 20:06:45.160955 3163 status_manager.go:895] "Failed to get status for pod" 
podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:45.239370 kubelet[3163]: I0420 20:06:45.219916 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:46.057683 kubelet[3163]: E0420 20:06:46.055058 3163 projected.go:289] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:06:46.178317 kubelet[3163]: E0420 20:06:46.090898 3163 projected.go:194] Error preparing data for projected volume kube-api-access-qj2d9 for pod tigera-operator/tigera-operator-6bf85f8dd-hvgdj: failed to sync configmap cache: timed out waiting for the condition Apr 20 20:06:46.254240 kubelet[3163]: E0420 20:06:46.247032 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:06:46.273798 kubelet[3163]: E0420 20:06:46.257295 3163 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9 podName:22f1ff03-de8a-48db-b03e-54fdbe0d3d5f nodeName:}" failed. No retries permitted until 2026-04-20 20:08:48.244702767 +0000 UTC m=+3626.338952470 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qj2d9" (UniqueName: "kubernetes.io/projected/22f1ff03-de8a-48db-b03e-54fdbe0d3d5f-kube-api-access-qj2d9") pod "tigera-operator-6bf85f8dd-hvgdj" (UID: "22f1ff03-de8a-48db-b03e-54fdbe0d3d5f") : failed to sync configmap cache: timed out waiting for the condition Apr 20 20:06:49.040202 sshd[7763]: Connection closed by 10.0.0.1 port 33092 Apr 20 20:06:49.045258 sshd-session[7729]: pam_unix(sshd:session): session closed for user core Apr 20 20:06:49.136000 audit[7729]: AUDIT1106 pid=7729 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:49.146000 audit[7729]: AUDIT1104 pid=7729 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:49.222400 kernel: audit: type=1106 audit(1776715609.136:1408): pid=7729 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:49.239877 kernel: audit: type=1104 audit(1776715609.146:1409): pid=7729 uid=0 auid=500 ses=75 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:06:49.330709 kubelet[3163]: E0420 20:06:49.308331 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:06:49.359182 systemd[1]: sshd@73-12301-10.0.0.14:22-10.0.0.1:33092.service: Deactivated successfully. Apr 20 20:06:49.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-12301-10.0.0.14:22-10.0.0.1:33092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:06:49.524621 systemd[1]: sshd@73-12301-10.0.0.14:22-10.0.0.1:33092.service: Consumed 2.051s CPU time, 4.1M memory peak. Apr 20 20:06:49.577192 kernel: audit: type=1131 audit(1776715609.511:1410): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@73-12301-10.0.0.14:22-10.0.0.1:33092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:06:50.094935 systemd[1]: session-75.scope: Deactivated successfully. Apr 20 20:06:50.293186 systemd[1]: session-75.scope: Consumed 11.018s CPU time, 18.1M memory peak. 
Apr 20 20:06:50.651193 systemd-logind[1627]: Session 75 logged out. Waiting for processes to exit. Apr 20 20:06:51.069095 systemd-logind[1627]: Removed session 75. Apr 20 20:06:53.296136 kubelet[3163]: E0420 20:06:53.296061 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:06:55.724088 systemd[1]: Started sshd@74-13-10.0.0.14:22-10.0.0.1:59562.service - OpenSSH per-connection server daemon (10.0.0.1:59562). Apr 20 20:06:55.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-13-10.0.0.14:22-10.0.0.1:59562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:06:56.143944 kernel: audit: type=1130 audit(1776715615.829:1411): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-13-10.0.0.14:22-10.0.0.1:59562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:06:56.354366 kubelet[3163]: E0420 20:06:56.354149 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=2825\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 20 20:06:56.527317 kubelet[3163]: I0420 20:06:56.527144 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:56.633938 kubelet[3163]: I0420 20:06:56.628290 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:57.316168 kubelet[3163]: I0420 20:06:57.280745 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:57.928087 kubelet[3163]: E0420 20:06:57.927695 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.8s" Apr 20 20:06:58.557496 kubelet[3163]: E0420 20:06:58.557128 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:06:56Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:58.898269 kubelet[3163]: E0420 20:06:58.889308 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:59.221309 kubelet[3163]: E0420 20:06:59.139226 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:59.262246 kubelet[3163]: E0420 20:06:59.260747 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.329s" Apr 20 20:06:59.369416 kubelet[3163]: E0420 20:06:59.368008 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:59.644395 kubelet[3163]: E0420 20:06:59.630329 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:06:59.743248 kubelet[3163]: E0420 20:06:59.739338 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:06:59.768362 kubelet[3163]: E0420 20:06:59.765101 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:06:59.901074 kubelet[3163]: E0420 20:06:59.885419 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://10.0.0.14:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=1537\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Apr 20 20:07:00.244173 kubelet[3163]: I0420 20:07:00.232079 3163 scope.go:117] "RemoveContainer" 
containerID="ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d" Apr 20 20:07:00.308887 kubelet[3163]: E0420 20:07:00.299396 3163 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=calico-apiserver pod=calico-apiserver-84684997fc-zpm5v_calico-system(dfb0b7d2-b28d-4433-9fba-0074dfdf81ee)\"" pod="calico-system/calico-apiserver-84684997fc-zpm5v" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" Apr 20 20:07:00.311412 kubelet[3163]: I0420 20:07:00.309335 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:00.312964 kubelet[3163]: I0420 20:07:00.312230 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:00.316780 kubelet[3163]: E0420 20:07:00.312249 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/serviceaccounts/calico-apiserver/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"calico-apiserver\"/\"calico-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"calico-apiserver-84684997fc-zpm5v\", UID:\"dfb0b7d2-b28d-4433-9fba-0074dfdf81ee\"}" Apr 20 20:07:00.318427 kubelet[3163]: I0420 20:07:00.317805 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get 
\"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:00.414135 kubelet[3163]: E0420 20:07:00.413161 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:07:00.871000 audit[7801]: AUDIT1101 pid=7801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:00.880882 kernel: audit: type=1101 audit(1776715620.871:1412): pid=7801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:00.896415 sshd[7801]: Accepted publickey for core from 10.0.0.1 port 59562 ssh2: RSA SHA256:ZNIzts6V4KYKlrJxXaosrimCRlmsV/+NkZ5UtjwHrjE Apr 20 20:07:00.921000 audit[7801]: AUDIT1103 pid=7801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:00.971000 audit[7801]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce2216a60 a2=3 a3=0 items=0 ppid=1 pid=7801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=76 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:07:00.971000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:07:01.118292 kernel: audit: type=1103 audit(1776715620.921:1413): pid=7801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:01.139318 kernel: audit: type=1006 audit(1776715620.971:1414): pid=7801 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=76 res=1 Apr 20 20:07:01.129832 sshd-session[7801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 20 20:07:01.234249 kernel: audit: type=1300 audit(1776715620.971:1414): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffce2216a60 a2=3 a3=0 items=0 ppid=1 pid=7801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=76 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 20 20:07:01.234408 kernel: audit: type=1327 audit(1776715620.971:1414): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Apr 20 20:07:01.944638 systemd-logind[1627]: New session '76' of user 'core' with class 'user' and type 'tty'. Apr 20 20:07:02.075034 systemd[1]: Started session-76.scope - Session 76 of User core. 
Apr 20 20:07:02.637000 audit[7801]: AUDIT1105 pid=7801 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:02.656880 kernel: audit: type=1105 audit(1776715622.637:1415): pid=7801 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:03.138000 audit[7814]: AUDIT1103 pid=7814 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:03.245315 kernel: audit: type=1103 audit(1776715623.138:1416): pid=7814 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:04.986678 kubelet[3163]: E0420 20:07:04.986443 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.098s" Apr 20 20:07:05.195190 kubelet[3163]: I0420 20:07:05.174353 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:05.432022 kubelet[3163]: I0420 20:07:05.386130 3163 
status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:05.622418 kubelet[3163]: I0420 20:07:05.621840 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:05.988355 kubelet[3163]: E0420 20:07:05.883334 3163 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=2406\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Apr 20 20:07:06.066700 kubelet[3163]: E0420 20:07:06.065753 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.075s" Apr 20 20:07:07.283838 containerd[1659]: time="2026-04-20T20:07:07.280311203Z" level=info msg="TaskExit event container_id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" id:\"54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c\" pid:7012 exit_status:255 exited_at:{seconds:1776715399 nanos:78895607}" Apr 20 20:07:07.497047 kubelet[3163]: E0420 20:07:07.481318 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:07:09.388918 kubelet[3163]: E0420 20:07:09.345285 3163 reflector.go:200] 
"Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2901\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 20 20:07:10.350199 kubelet[3163]: E0420 20:07:10.108407 3163 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/events/kube-scheduler-localhost.18a826e9b6ceb5d3\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-localhost.18a826e9b6ceb5d3 kube-system 3221 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-localhost,UID:33fee6ba1581201eda98a989140db110,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-20 19:20:38 +0000 UTC,LastTimestamp:2026-04-20 19:21:14.833055697 +0000 UTC m=+772.927305422,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 20 20:07:11.023330 kubelet[3163]: E0420 20:07:10.931179 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.05s" Apr 20 20:07:11.990500 kubelet[3163]: E0420 20:07:11.988848 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:07:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:07:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:07:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-20T20:07:11Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.14:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:12.159321 kubelet[3163]: E0420 20:07:12.074644 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/serviceaccounts/csi-node-driver/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"csi-node-driver\"/\"calico-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"csi-node-driver-5h6vg\", UID:\"9f02930c-961c-4c4b-8334-b61cbd5c3d20\"}" Apr 20 20:07:12.374962 kubelet[3163]: E0420 20:07:12.367525 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:12.669032 kubelet[3163]: E0420 20:07:12.654488 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:12.751963 kubelet[3163]: E0420 20:07:12.745702 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node 
\"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:12.846448 kubelet[3163]: E0420 20:07:12.841805 3163 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://10.0.0.14:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:12.872315 kubelet[3163]: E0420 20:07:12.853462 3163 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Apr 20 20:07:14.487085 kubelet[3163]: E0420 20:07:14.475015 3163 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.602s" Apr 20 20:07:14.594074 sshd[7814]: Connection closed by 10.0.0.1 port 59562 Apr 20 20:07:14.695552 sshd-session[7801]: pam_unix(sshd:session): session closed for user core Apr 20 20:07:14.728000 audit[7801]: AUDIT1106 pid=7801 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:14.752000 audit[7801]: AUDIT1104 pid=7801 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:15.021262 kernel: audit: type=1106 audit(1776715634.728:1417): pid=7801 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Apr 20 20:07:15.024106 kernel: audit: type=1104 audit(1776715634.752:1418): pid=7801 uid=0 auid=500 ses=76 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Apr 20 20:07:15.078941 kubelet[3163]: I0420 20:07:15.077428 3163 scope.go:117] "RemoveContainer" containerID="ac94b4ba291f8cdf5c7d0515c11d0bad7f6c92f6d00b04c1c0c2a0ec7954d02d" Apr 20 20:07:15.218214 kubelet[3163]: E0420 20:07:15.218097 3163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="7s" Apr 20 20:07:15.218959 systemd[1]: sshd@74-13-10.0.0.14:22-10.0.0.1:59562.service: Deactivated successfully. Apr 20 20:07:15.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-13-10.0.0.14:22-10.0.0.1:59562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 20 20:07:15.220333 systemd[1]: sshd@74-13-10.0.0.14:22-10.0.0.1:59562.service: Consumed 2.364s CPU time, 4.1M memory peak. Apr 20 20:07:15.226359 kernel: audit: type=1131 audit(1776715635.219:1419): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@74-13-10.0.0.14:22-10.0.0.1:59562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 20 20:07:15.227341 kubelet[3163]: E0420 20:07:15.227117 3163 token_manager.go:124] "Couldn't update token" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/serviceaccounts/calico-apiserver/token\": dial tcp 10.0.0.14:6443: connect: connection refused" cacheKey="\"calico-apiserver\"/\"calico-system\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"calico-apiserver-84684997fc-zpm5v\", UID:\"dfb0b7d2-b28d-4433-9fba-0074dfdf81ee\"}" Apr 20 20:07:15.230253 kubelet[3163]: I0420 20:07:15.227755 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:15.239607 kubelet[3163]: I0420 20:07:15.239419 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:15.280161 kubelet[3163]: I0420 20:07:15.277227 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:15.281131 kubelet[3163]: I0420 20:07:15.281042 3163 scope.go:117] "RemoveContainer" containerID="292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a" Apr 20 20:07:15.281906 kubelet[3163]: I0420 20:07:15.281875 3163 scope.go:117] "RemoveContainer" containerID="54c6b42922bf5b9031d3f8d7454f339017c2e17aaf05317c673f5a78f690205c" Apr 20 20:07:15.281970 kubelet[3163]: E0420 20:07:15.281954 
3163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 20 20:07:15.283800 kubelet[3163]: I0420 20:07:15.282596 3163 status_manager.go:895] "Failed to get status for pod" podUID="5ef51a6b32499d3d1e531fb8b3a83d4f" pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:15.343847 systemd[1]: session-76.scope: Deactivated successfully. Apr 20 20:07:15.384021 systemd[1]: session-76.scope: Consumed 9.336s CPU time, 16.1M memory peak. Apr 20 20:07:15.436971 kubelet[3163]: I0420 20:07:15.385059 3163 status_manager.go:895] "Failed to get status for pod" podUID="071d23f6-a94b-4165-9229-2d0570b516d8" pod="calico-system/calico-node-g9fs5" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-node-g9fs5\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:15.456579 kubelet[3163]: I0420 20:07:15.455931 3163 status_manager.go:895] "Failed to get status for pod" podUID="dfb0b7d2-b28d-4433-9fba-0074dfdf81ee" pod="calico-system/calico-apiserver-84684997fc-zpm5v" err="Get \"https://10.0.0.14:6443/api/v1/namespaces/calico-system/pods/calico-apiserver-84684997fc-zpm5v\": dial tcp 10.0.0.14:6443: connect: connection refused" Apr 20 20:07:15.458762 systemd-logind[1627]: Session 76 logged out. Waiting for processes to exit. 
Apr 20 20:07:15.475343 containerd[1659]: time="2026-04-20T20:07:15.475302742Z" level=info msg="RemoveContainer for \"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\"" Apr 20 20:07:15.475978 containerd[1659]: time="2026-04-20T20:07:15.475938112Z" level=info msg="CreateContainer within sandbox \"a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2\" for container name:\"kube-apiserver\" attempt:3" Apr 20 20:07:15.475747 systemd-logind[1627]: Removed session 76. Apr 20 20:07:15.476144 containerd[1659]: time="2026-04-20T20:07:15.476013271Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for container name:\"calico-apiserver\" attempt:5" Apr 20 20:07:15.553981 containerd[1659]: time="2026-04-20T20:07:15.553077694Z" level=info msg="RemoveContainer for \"292f0d8812e0e22d6b9c42f42298e77dbbf1a3ff614a20049631a83923de267a\" returns successfully" Apr 20 20:07:15.636846 containerd[1659]: time="2026-04-20T20:07:15.636661783Z" level=info msg="Container a49e805f6ba4913eb3941c72012f25611041683d43539d254e09777806c4a06a: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:07:15.645030 containerd[1659]: time="2026-04-20T20:07:15.636670073Z" level=info msg="Container 80834596950a79f07583d6f4e9671c44b4ed2bfb300d7fe421f7f61b1165232d: CDI devices from CRI Config.CDIDevices: []" Apr 20 20:07:15.753944 containerd[1659]: time="2026-04-20T20:07:15.752244619Z" level=info msg="CreateContainer within sandbox \"a0a1c013bb9119be3e83c967343167afaabfa5d3210072f49e9de991e138aad2\" for name:\"kube-apiserver\" attempt:3 returns container id \"a49e805f6ba4913eb3941c72012f25611041683d43539d254e09777806c4a06a\"" Apr 20 20:07:15.787847 containerd[1659]: time="2026-04-20T20:07:15.761926013Z" level=info msg="CreateContainer within sandbox \"de19448814837f2189e818ee408a7454c5277a1c9b2f91f664708851a1478571\" for name:\"calico-apiserver\" attempt:5 returns container id 
\"80834596950a79f07583d6f4e9671c44b4ed2bfb300d7fe421f7f61b1165232d\"" Apr 20 20:07:15.848918 containerd[1659]: time="2026-04-20T20:07:15.848358568Z" level=info msg="StartContainer for \"a49e805f6ba4913eb3941c72012f25611041683d43539d254e09777806c4a06a\"" Apr 20 20:07:15.849357 containerd[1659]: time="2026-04-20T20:07:15.848642519Z" level=info msg="StartContainer for \"80834596950a79f07583d6f4e9671c44b4ed2bfb300d7fe421f7f61b1165232d\"" Apr 20 20:07:15.850453 containerd[1659]: time="2026-04-20T20:07:15.850404853Z" level=info msg="connecting to shim 80834596950a79f07583d6f4e9671c44b4ed2bfb300d7fe421f7f61b1165232d" address="unix:///run/containerd/s/9f25d20f4617cde34f7397032d9ecbc0b43cd780bc15ce3e8713428f4b2ceb63" protocol=ttrpc version=3 Apr 20 20:07:15.909209 containerd[1659]: time="2026-04-20T20:07:15.906364114Z" level=info msg="connecting to shim a49e805f6ba4913eb3941c72012f25611041683d43539d254e09777806c4a06a" address="unix:///run/containerd/s/80102222aa3ed4b7ee78377cd8f0cd98fe2254d5e4d09c655e1726e3fa17fed4" protocol=ttrpc version=3