May 14 00:08:02.982434 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025
May 14 00:08:02.982455 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 14 00:08:02.982465 kernel: BIOS-provided physical RAM map:
May 14 00:08:02.982471 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 14 00:08:02.982477 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 14 00:08:02.982483 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 14 00:08:02.982490 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
May 14 00:08:02.982496 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
May 14 00:08:02.982503 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 14 00:08:02.982509 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 14 00:08:02.982516 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 14 00:08:02.982522 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 14 00:08:02.982528 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 14 00:08:02.982534 kernel: NX (Execute Disable) protection: active
May 14 00:08:02.982541 kernel: APIC: Static calls initialized
May 14 00:08:02.982549 kernel: SMBIOS 3.0.0 present.
May 14 00:08:02.982556 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
May 14 00:08:02.982563 kernel: Hypervisor detected: KVM
May 14 00:08:02.982569 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 14 00:08:02.982576 kernel: kvm-clock: using sched offset of 3469404934 cycles
May 14 00:08:02.982583 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 14 00:08:02.982590 kernel: tsc: Detected 2495.310 MHz processor
May 14 00:08:02.982597 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 14 00:08:02.982604 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 14 00:08:02.982612 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
May 14 00:08:02.982619 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 14 00:08:02.982626 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 14 00:08:02.982632 kernel: Using GB pages for direct mapping
May 14 00:08:02.982639 kernel: ACPI: Early table checksum verification disabled
May 14 00:08:02.982646 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
May 14 00:08:02.982653 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:08:02.982659 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:08:02.982666 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:08:02.982674 kernel: ACPI: FACS 0x000000007CFE0000 000040
May 14 00:08:02.982681 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:08:02.982687 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:08:02.982694 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:08:02.982701 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 00:08:02.982707 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
May 14 00:08:02.982714 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
May 14 00:08:02.982723 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
May 14 00:08:02.982731 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
May 14 00:08:02.982738 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
May 14 00:08:02.982745 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
May 14 00:08:02.982752 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
May 14 00:08:02.982759 kernel: No NUMA configuration found
May 14 00:08:02.982766 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
May 14 00:08:02.982774 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
May 14 00:08:02.982782 kernel: Zone ranges:
May 14 00:08:02.982793 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 14 00:08:02.982807 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
May 14 00:08:02.982817 kernel: Normal empty
May 14 00:08:02.982826 kernel: Movable zone start for each node
May 14 00:08:02.982834 kernel: Early memory node ranges
May 14 00:08:02.982843 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 14 00:08:02.982852 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
May 14 00:08:02.982862 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
May 14 00:08:02.982875 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 14 00:08:02.982884 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 14 00:08:02.982893 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 14 00:08:02.982900 kernel: ACPI: PM-Timer IO Port: 0x608
May 14 00:08:02.982907 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 14 00:08:02.982914 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 14 00:08:02.982923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 14 00:08:02.982937 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 14 00:08:02.982949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 14 00:08:02.982962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 14 00:08:02.982971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 14 00:08:02.982980 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 14 00:08:02.982989 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 14 00:08:02.982998 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 14 00:08:02.983007 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 14 00:08:02.983016 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 14 00:08:02.983025 kernel: Booting paravirtualized kernel on KVM
May 14 00:08:02.983034 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 14 00:08:02.983045 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 14 00:08:02.983054 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 14 00:08:02.983064 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 14 00:08:02.983072 kernel: pcpu-alloc: [0] 0 1
May 14 00:08:02.983082 kernel: kvm-guest: PV spinlocks disabled, no host support
May 14 00:08:02.983092 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 14 00:08:02.983102 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 00:08:02.983111 kernel: random: crng init done
May 14 00:08:02.983122 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 00:08:02.983131 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 14 00:08:02.983140 kernel: Fallback order for Node 0: 0
May 14 00:08:02.983149 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
May 14 00:08:02.983158 kernel: Policy zone: DMA32
May 14 00:08:02.983167 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 00:08:02.983176 kernel: Memory: 1917956K/2047464K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 129248K reserved, 0K cma-reserved)
May 14 00:08:02.983185 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 14 00:08:02.983195 kernel: ftrace: allocating 37993 entries in 149 pages
May 14 00:08:02.983206 kernel: ftrace: allocated 149 pages with 4 groups
May 14 00:08:02.983215 kernel: Dynamic Preempt: voluntary
May 14 00:08:02.983224 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 00:08:02.983234 kernel: rcu: RCU event tracing is enabled.
May 14 00:08:02.983244 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 14 00:08:02.983253 kernel: Trampoline variant of Tasks RCU enabled.
May 14 00:08:02.983262 kernel: Rude variant of Tasks RCU enabled.
May 14 00:08:02.983272 kernel: Tracing variant of Tasks RCU enabled.
May 14 00:08:02.983281 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 00:08:02.983293 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 14 00:08:02.983302 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 14 00:08:02.983311 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 00:08:02.983394 kernel: Console: colour VGA+ 80x25
May 14 00:08:02.983404 kernel: printk: console [tty0] enabled
May 14 00:08:02.983424 kernel: printk: console [ttyS0] enabled
May 14 00:08:02.983433 kernel: ACPI: Core revision 20230628
May 14 00:08:02.983444 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 14 00:08:02.983453 kernel: APIC: Switch to symmetric I/O mode setup
May 14 00:08:02.983466 kernel: x2apic enabled
May 14 00:08:02.983475 kernel: APIC: Switched APIC routing to: physical x2apic
May 14 00:08:02.983482 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 14 00:08:02.983489 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 14 00:08:02.983496 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
May 14 00:08:02.983503 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 14 00:08:02.983510 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 14 00:08:02.983517 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 14 00:08:02.983531 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 14 00:08:02.983538 kernel: Spectre V2 : Mitigation: Retpolines
May 14 00:08:02.983545 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 14 00:08:02.983552 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 14 00:08:02.983561 kernel: RETBleed: Mitigation: untrained return thunk
May 14 00:08:02.983569 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 14 00:08:02.983576 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 14 00:08:02.983583 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 14 00:08:02.983591 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 14 00:08:02.983599 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 14 00:08:02.983606 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 14 00:08:02.983614 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 14 00:08:02.983621 kernel: Freeing SMP alternatives memory: 32K
May 14 00:08:02.983628 kernel: pid_max: default: 32768 minimum: 301
May 14 00:08:02.983636 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 14 00:08:02.983643 kernel: landlock: Up and running.
May 14 00:08:02.983650 kernel: SELinux: Initializing.
May 14 00:08:02.983657 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 14 00:08:02.983666 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 14 00:08:02.983673 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 14 00:08:02.983680 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 00:08:02.983688 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 00:08:02.983695 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 14 00:08:02.983703 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 14 00:08:02.983710 kernel: ... version: 0
May 14 00:08:02.983717 kernel: ... bit width: 48
May 14 00:08:02.983725 kernel: ... generic registers: 6
May 14 00:08:02.983733 kernel: ... value mask: 0000ffffffffffff
May 14 00:08:02.983740 kernel: ... max period: 00007fffffffffff
May 14 00:08:02.983747 kernel: ... fixed-purpose events: 0
May 14 00:08:02.983754 kernel: ... event mask: 000000000000003f
May 14 00:08:02.983761 kernel: signal: max sigframe size: 1776
May 14 00:08:02.983768 kernel: rcu: Hierarchical SRCU implementation.
May 14 00:08:02.983776 kernel: rcu: Max phase no-delay instances is 400.
May 14 00:08:02.983783 kernel: smp: Bringing up secondary CPUs ...
May 14 00:08:02.983792 kernel: smpboot: x86: Booting SMP configuration:
May 14 00:08:02.983799 kernel: .... node #0, CPUs: #1
May 14 00:08:02.983806 kernel: smp: Brought up 1 node, 2 CPUs
May 14 00:08:02.983813 kernel: smpboot: Max logical packages: 1
May 14 00:08:02.983821 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
May 14 00:08:02.983828 kernel: devtmpfs: initialized
May 14 00:08:02.983835 kernel: x86/mm: Memory block size: 128MB
May 14 00:08:02.983843 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 00:08:02.983850 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 14 00:08:02.983857 kernel: pinctrl core: initialized pinctrl subsystem
May 14 00:08:02.983866 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 00:08:02.983873 kernel: audit: initializing netlink subsys (disabled)
May 14 00:08:02.983881 kernel: audit: type=2000 audit(1747181281.654:1): state=initialized audit_enabled=0 res=1
May 14 00:08:02.983888 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 00:08:02.983895 kernel: thermal_sys: Registered thermal governor 'user_space'
May 14 00:08:02.983902 kernel: cpuidle: using governor menu
May 14 00:08:02.983909 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 00:08:02.983917 kernel: dca service started, version 1.12.1
May 14 00:08:02.983924 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 14 00:08:02.983933 kernel: PCI: Using configuration type 1 for base access
May 14 00:08:02.983940 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 14 00:08:02.983947 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 00:08:02.983954 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 14 00:08:02.983962 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 00:08:02.983969 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 14 00:08:02.983976 kernel: ACPI: Added _OSI(Module Device)
May 14 00:08:02.983983 kernel: ACPI: Added _OSI(Processor Device)
May 14 00:08:02.983991 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 00:08:02.983999 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 00:08:02.984006 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 00:08:02.984014 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 14 00:08:02.984021 kernel: ACPI: Interpreter enabled
May 14 00:08:02.984028 kernel: ACPI: PM: (supports S0 S5)
May 14 00:08:02.984035 kernel: ACPI: Using IOAPIC for interrupt routing
May 14 00:08:02.984042 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 14 00:08:02.984049 kernel: PCI: Using E820 reservations for host bridge windows
May 14 00:08:02.984057 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 14 00:08:02.984065 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 00:08:02.984188 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 00:08:02.984263 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 14 00:08:02.984355 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 14 00:08:02.984366 kernel: PCI host bridge to bus 0000:00
May 14 00:08:02.984475 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 14 00:08:02.984565 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 14 00:08:02.984642 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 14 00:08:02.984710 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
May 14 00:08:02.984777 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 14 00:08:02.984842 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 14 00:08:02.984908 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 00:08:02.984996 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 14 00:08:02.985085 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
May 14 00:08:02.985161 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
May 14 00:08:02.985236 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
May 14 00:08:02.985313 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
May 14 00:08:02.985428 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
May 14 00:08:02.985513 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 14 00:08:02.985633 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 14 00:08:02.985751 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
May 14 00:08:02.985866 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 14 00:08:02.985978 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
May 14 00:08:02.986072 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 14 00:08:02.986149 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
May 14 00:08:02.986231 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 14 00:08:02.986312 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
May 14 00:08:02.986436 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 14 00:08:02.986517 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
May 14 00:08:02.986598 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 14 00:08:02.986673 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
May 14 00:08:02.986753 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 14 00:08:02.986834 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
May 14 00:08:02.986915 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 14 00:08:02.986991 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
May 14 00:08:02.987085 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 14 00:08:02.987183 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
May 14 00:08:02.987288 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 14 00:08:02.987443 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 14 00:08:02.987549 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 14 00:08:02.987638 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
May 14 00:08:02.987716 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
May 14 00:08:02.987826 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 14 00:08:02.987921 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 14 00:08:02.988048 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 14 00:08:02.988142 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
May 14 00:08:02.988228 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
May 14 00:08:02.988306 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
May 14 00:08:02.991436 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 14 00:08:02.991530 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 14 00:08:02.991607 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 14 00:08:02.991689 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 14 00:08:02.991773 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
May 14 00:08:02.991852 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 14 00:08:02.991949 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 14 00:08:02.992037 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 14 00:08:02.992134 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 14 00:08:02.992234 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
May 14 00:08:02.992382 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
May 14 00:08:02.992487 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 14 00:08:02.992565 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 14 00:08:02.992639 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 14 00:08:02.992724 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 14 00:08:02.992801 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
May 14 00:08:02.992876 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 14 00:08:02.992975 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 14 00:08:02.993061 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 14 00:08:02.993161 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 14 00:08:02.993253 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
May 14 00:08:02.993349 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
May 14 00:08:02.993436 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 14 00:08:02.993511 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 14 00:08:02.993589 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 14 00:08:02.994070 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 14 00:08:02.994156 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
May 14 00:08:02.994233 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
May 14 00:08:02.994309 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 14 00:08:02.994468 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 14 00:08:02.994544 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 14 00:08:02.994555 kernel: acpiphp: Slot [0] registered
May 14 00:08:02.994640 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 14 00:08:02.994714 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
May 14 00:08:02.994790 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
May 14 00:08:02.994863 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
May 14 00:08:02.994952 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 14 00:08:02.995040 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 14 00:08:02.995127 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 14 00:08:02.995144 kernel: acpiphp: Slot [0-2] registered
May 14 00:08:02.995246 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 14 00:08:02.995368 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 14 00:08:02.995483 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 14 00:08:02.995500 kernel: acpiphp: Slot [0-3] registered
May 14 00:08:02.995587 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 14 00:08:02.995674 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 14 00:08:02.995759 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 14 00:08:02.995774 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 14 00:08:02.995786 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 14 00:08:02.995794 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 14 00:08:02.995801 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 14 00:08:02.995809 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 14 00:08:02.995816 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 14 00:08:02.995823 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 14 00:08:02.995831 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 14 00:08:02.995838 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 14 00:08:02.995848 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 14 00:08:02.995861 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 14 00:08:02.995870 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 14 00:08:02.995877 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 14 00:08:02.995885 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 14 00:08:02.995892 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 14 00:08:02.995900 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 14 00:08:02.995907 kernel: iommu: Default domain type: Translated
May 14 00:08:02.995914 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 14 00:08:02.995922 kernel: PCI: Using ACPI for IRQ routing
May 14 00:08:02.995931 kernel: PCI: pci_cache_line_size set to 64 bytes
May 14 00:08:02.995938 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 14 00:08:02.995946 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
May 14 00:08:02.996027 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 14 00:08:02.996102 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 14 00:08:02.996175 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 14 00:08:02.996186 kernel: vgaarb: loaded
May 14 00:08:02.996193 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 14 00:08:02.996201 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 14 00:08:02.996211 kernel: clocksource: Switched to clocksource kvm-clock
May 14 00:08:02.996218 kernel: VFS: Disk quotas dquot_6.6.0
May 14 00:08:02.996226 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 00:08:02.996233 kernel: pnp: PnP ACPI init
May 14 00:08:02.997404 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 14 00:08:02.997431 kernel: pnp: PnP ACPI: found 5 devices
May 14 00:08:02.997439 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 14 00:08:02.997447 kernel: NET: Registered PF_INET protocol family
May 14 00:08:02.997457 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 00:08:02.997465 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 14 00:08:02.997473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 00:08:02.997483 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 14 00:08:02.997494 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 14 00:08:02.997504 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 14 00:08:02.997513 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 14 00:08:02.997520 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 14 00:08:02.997530 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 00:08:02.997537 kernel: NET: Registered PF_XDP protocol family
May 14 00:08:02.997623 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 14 00:08:02.997713 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 14 00:08:02.997790 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 14 00:08:02.997865 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
May 14 00:08:02.997938 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
May 14 00:08:02.998045 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
May 14 00:08:02.998149 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 14 00:08:02.998236 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 14 00:08:02.998310 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 14 00:08:02.999488 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 14 00:08:02.999563 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 14 00:08:02.999637 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 14 00:08:02.999709 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 14 00:08:02.999781 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 14 00:08:02.999857 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 14 00:08:02.999929 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 14 00:08:03.000001 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 14 00:08:03.000072 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 14 00:08:03.000144 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 14 00:08:03.000216 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 14 00:08:03.000290 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 14 00:08:03.001760 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 14 00:08:03.004371 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 14 00:08:03.004481 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 14 00:08:03.004559 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 14 00:08:03.004633 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
May 14 00:08:03.004705 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 14 00:08:03.004778 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 14 00:08:03.004852 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 14 00:08:03.004924 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
May 14 00:08:03.004997 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 14 00:08:03.005075 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 14 00:08:03.005148 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 14 00:08:03.005281 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
May 14 00:08:03.006795 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 14 00:08:03.006907 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 14 00:08:03.007009 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 14 00:08:03.007099 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 14 00:08:03.007286 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 14 00:08:03.007441 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
May 14 00:08:03.007628 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 14 00:08:03.007747 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 14 00:08:03.007859 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
May 14 00:08:03.007975 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
May 14 00:08:03.008081 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
May 14 00:08:03.008174 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
May 14 00:08:03.008280 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
May 14 00:08:03.009432 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 14 00:08:03.009518 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
May 14 00:08:03.009594 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
May 14 00:08:03.009681 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
May 14 00:08:03.009771 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
May 14 00:08:03.009877 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
May 14 00:08:03.009985 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
May 14 00:08:03.010090 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
May 14 00:08:03.010183 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
May 14 00:08:03.010440 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
May 14 00:08:03.011616 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
May 14 00:08:03.011727 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
May 14 00:08:03.011813 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
May 14 00:08:03.011936 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
May 14 00:08:03.012014 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
May 14 00:08:03.012083 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
May 14 00:08:03.012094 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 14 00:08:03.012103 kernel: PCI: CLS 0 bytes, default 64
May 14 00:08:03.012111 kernel: Initialise system trusted keyrings
May 14 00:08:03.012119 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 14 00:08:03.012126 kernel: Key type asymmetric registered
May 14 00:08:03.012138 kernel: Asymmetric key parser 'x509' registered
May 14 00:08:03.012146 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 14 00:08:03.012154 kernel: io scheduler mq-deadline registered
May 14 00:08:03.012162 kernel: io scheduler kyber registered
May 14 00:08:03.012170 kernel: io scheduler bfq registered
May 14 00:08:03.012252 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
May 14 00:08:03.012391 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
May 14 00:08:03.012485 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
May 14 00:08:03.012563 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
May 14 00:08:03.012675 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
May 14 00:08:03.012754 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
May 14 00:08:03.012829 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
May 14 00:08:03.012904 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
May 14 00:08:03.012982 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
May 14 00:08:03.013058 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
May 14 00:08:03.013134 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
May 14 00:08:03.013209 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
May 14 00:08:03.013290 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
May 14 00:08:03.013396 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
May 14 00:08:03.013493 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
May 14 00:08:03.013569 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
May 14 00:08:03.013581 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 14 00:08:03.013655 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
May 14 00:08:03.013730 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
May 14 00:08:03.013742 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 14 00:08:03.013753 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
May 14 00:08:03.013761 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 00:08:03.013770 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 14 00:08:03.013778 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 14 00:08:03.013786 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 14 00:08:03.013794 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 14 00:08:03.013873 kernel: rtc_cmos 00:03: RTC can wake from S4
May 14 00:08:03.013886 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 14 00:08:03.013954 kernel: rtc_cmos 00:03: registered as rtc0
May 14 00:08:03.014057 kernel: rtc_cmos 00:03: setting system clock to 2025-05-14T00:08:02 UTC (1747181282)
May 14 00:08:03.014159 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 14 00:08:03.014173 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 14 00:08:03.014182 kernel: NET: Registered PF_INET6 protocol family
May 14 00:08:03.014190 kernel: Segment Routing with IPv6
May 14 00:08:03.014198 kernel: In-situ OAM (IOAM) with IPv6
May 14 00:08:03.014206 kernel: NET: Registered PF_PACKET protocol family
May 14 00:08:03.014213 kernel: Key type dns_resolver registered
May 14 00:08:03.014225 kernel: IPI shorthand broadcast: enabled
May 14 00:08:03.014233 kernel: sched_clock: Marking stable (1384008561, 151770744)->(1560248522, -24469217)
May 14 00:08:03.014241 kernel: registered taskstats version 1
May 14 00:08:03.014249 kernel: Loading compiled-in X.509 certificates
May 14 00:08:03.014257 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94'
May 14 00:08:03.014264 kernel: Key type .fscrypt registered
May 14 00:08:03.014272 kernel: Key type fscrypt-provisioning registered
May 14 00:08:03.014280 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 00:08:03.014287 kernel: ima: Allocated hash algorithm: sha1
May 14 00:08:03.014297 kernel: ima: No architecture policies found
May 14 00:08:03.014304 kernel: clk: Disabling unused clocks
May 14 00:08:03.014312 kernel: Freeing unused kernel image (initmem) memory: 43604K
May 14 00:08:03.014354 kernel: Write protecting the kernel read-only data: 40960k
May 14 00:08:03.014362 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K
May 14 00:08:03.014370 kernel: Run /init as init process
May 14 00:08:03.014378 kernel: with arguments:
May 14 00:08:03.014388 kernel: /init
May 14 00:08:03.014395 kernel: with environment:
May 14 00:08:03.014404 kernel: HOME=/
May 14 00:08:03.014424 kernel: TERM=linux
May 14 00:08:03.014431 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 00:08:03.014440 systemd[1]: Successfully made /usr/ read-only.
May 14 00:08:03.014453 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 00:08:03.014462 systemd[1]: Detected virtualization kvm.
May 14 00:08:03.014470 systemd[1]: Detected architecture x86-64.
May 14 00:08:03.014478 systemd[1]: Running in initrd.
May 14 00:08:03.014487 systemd[1]: No hostname configured, using default hostname.
May 14 00:08:03.014496 systemd[1]: Hostname set to .
May 14 00:08:03.014504 systemd[1]: Initializing machine ID from VM UUID.
May 14 00:08:03.014513 systemd[1]: Queued start job for default target initrd.target.
May 14 00:08:03.014521 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 00:08:03.014529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 00:08:03.014538 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 00:08:03.014547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 00:08:03.014557 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 00:08:03.014566 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 00:08:03.014575 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 00:08:03.014583 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 00:08:03.014592 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 00:08:03.014600 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 00:08:03.014609 systemd[1]: Reached target paths.target - Path Units.
May 14 00:08:03.014618 systemd[1]: Reached target slices.target - Slice Units.
May 14 00:08:03.014626 systemd[1]: Reached target swap.target - Swaps.
May 14 00:08:03.014634 systemd[1]: Reached target timers.target - Timer Units.
May 14 00:08:03.014643 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 00:08:03.014651 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 00:08:03.014659 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 00:08:03.014668 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 00:08:03.014676 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 00:08:03.014686 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 00:08:03.014694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 00:08:03.014702 systemd[1]: Reached target sockets.target - Socket Units.
May 14 00:08:03.014710 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 00:08:03.014718 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 00:08:03.014727 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 00:08:03.014735 systemd[1]: Starting systemd-fsck-usr.service...
May 14 00:08:03.014743 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 00:08:03.014751 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 00:08:03.014761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:08:03.014794 systemd-journald[188]: Collecting audit messages is disabled.
May 14 00:08:03.014815 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 00:08:03.014826 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 00:08:03.014836 systemd-journald[188]: Journal started
May 14 00:08:03.014855 systemd-journald[188]: Runtime Journal (/run/log/journal/194a356984ab435cb6a8370b34be7b4b) is 4.7M, max 38.3M, 33.5M free.
May 14 00:08:03.007666 systemd-modules-load[190]: Inserted module 'overlay'
May 14 00:08:03.057744 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 00:08:03.057769 kernel: Bridge firewalling registered
May 14 00:08:03.057779 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 00:08:03.039257 systemd-modules-load[190]: Inserted module 'br_netfilter'
May 14 00:08:03.058599 systemd[1]: Finished systemd-fsck-usr.service.
May 14 00:08:03.059661 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 00:08:03.060816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:08:03.065523 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:08:03.068490 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 00:08:03.075844 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 00:08:03.080402 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 00:08:03.087377 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 00:08:03.093188 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 00:08:03.101434 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 00:08:03.103631 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 00:08:03.115578 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:08:03.118427 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 00:08:03.121416 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 00:08:03.131535 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 00:08:03.140704 dracut-cmdline[222]: dracut-dracut-053
May 14 00:08:03.142988 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130
May 14 00:08:03.167893 systemd-resolved[223]: Positive Trust Anchors:
May 14 00:08:03.167908 systemd-resolved[223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:08:03.167938 systemd-resolved[223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:08:03.170122 systemd-resolved[223]: Defaulting to hostname 'linux'.
May 14 00:08:03.171455 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 00:08:03.172084 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 00:08:03.198359 kernel: SCSI subsystem initialized
May 14 00:08:03.208353 kernel: Loading iSCSI transport class v2.0-870.
May 14 00:08:03.218355 kernel: iscsi: registered transport (tcp)
May 14 00:08:03.252378 kernel: iscsi: registered transport (qla4xxx)
May 14 00:08:03.252703 kernel: QLogic iSCSI HBA Driver
May 14 00:08:03.300723 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 00:08:03.304295 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 00:08:03.354195 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 00:08:03.354291 kernel: device-mapper: uevent: version 1.0.3
May 14 00:08:03.354315 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 14 00:08:03.401395 kernel: raid6: avx2x4 gen() 30716 MB/s
May 14 00:08:03.418424 kernel: raid6: avx2x2 gen() 31971 MB/s
May 14 00:08:03.435654 kernel: raid6: avx2x1 gen() 24457 MB/s
May 14 00:08:03.435753 kernel: raid6: using algorithm avx2x2 gen() 31971 MB/s
May 14 00:08:03.455381 kernel: raid6: .... xor() 19315 MB/s, rmw enabled
May 14 00:08:03.455471 kernel: raid6: using avx2x2 recovery algorithm
May 14 00:08:03.475376 kernel: xor: automatically using best checksumming function avx
May 14 00:08:03.620360 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 00:08:03.636863 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 00:08:03.639582 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 00:08:03.675483 systemd-udevd[407]: Using default interface naming scheme 'v255'.
May 14 00:08:03.683813 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 00:08:03.687430 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 00:08:03.714680 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
May 14 00:08:03.752293 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 00:08:03.755625 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 00:08:03.809057 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 00:08:03.814307 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 00:08:03.841132 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 00:08:03.845132 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 00:08:03.847451 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 00:08:03.849050 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 00:08:03.852479 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 00:08:03.868884 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 00:08:03.888826 kernel: scsi host0: Virtio SCSI HBA
May 14 00:08:03.912214 kernel: ACPI: bus type USB registered
May 14 00:08:03.912255 kernel: usbcore: registered new interface driver usbfs
May 14 00:08:03.914340 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 14 00:08:03.916708 kernel: usbcore: registered new interface driver hub
May 14 00:08:03.916733 kernel: usbcore: registered new device driver usb
May 14 00:08:03.933340 kernel: cryptd: max_cpu_qlen set to 1000
May 14 00:08:03.958434 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 00:08:03.958555 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:08:03.959244 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:08:03.959796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:08:03.959893 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:08:03.963497 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:08:03.967633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:08:03.968858 kernel: libata version 3.00 loaded.
May 14 00:08:03.998660 kernel: AVX2 version of gcm_enc/dec engaged.
May 14 00:08:04.003920 kernel: ahci 0000:00:1f.2: version 3.0
May 14 00:08:04.004109 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 14 00:08:04.005149 kernel: AES CTR mode by8 optimization enabled
May 14 00:08:04.009173 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 14 00:08:04.009593 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 14 00:08:04.019020 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 14 00:08:04.019160 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 14 00:08:04.019261 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 14 00:08:04.019376 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 14 00:08:04.020864 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 14 00:08:04.020957 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 14 00:08:04.021044 kernel: hub 1-0:1.0: USB hub found
May 14 00:08:04.021161 kernel: hub 1-0:1.0: 4 ports detected
May 14 00:08:04.021248 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 14 00:08:04.021379 kernel: hub 2-0:1.0: USB hub found
May 14 00:08:04.021491 kernel: hub 2-0:1.0: 4 ports detected
May 14 00:08:04.026602 kernel: scsi host1: ahci
May 14 00:08:04.026725 kernel: scsi host2: ahci
May 14 00:08:04.034337 kernel: scsi host3: ahci
May 14 00:08:04.034504 kernel: scsi host4: ahci
May 14 00:08:04.034622 kernel: scsi host5: ahci
May 14 00:08:04.036337 kernel: scsi host6: ahci
May 14 00:08:04.036462 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
May 14 00:08:04.036473 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
May 14 00:08:04.036482 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
May 14 00:08:04.036495 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
May 14 00:08:04.036504 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
May 14 00:08:04.036513 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
May 14 00:08:04.089229 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:08:04.090817 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 00:08:04.122397 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 00:08:04.262475 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
May 14 00:08:04.349349 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 14 00:08:04.349474 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 14 00:08:04.352953 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 14 00:08:04.353356 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 14 00:08:04.355362 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 14 00:08:04.359333 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 14 00:08:04.360853 kernel: ata1.00: applying bridge limits
May 14 00:08:04.363571 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 14 00:08:04.364344 kernel: ata1.00: configured for UDMA/100
May 14 00:08:04.366718 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 14 00:08:04.396860 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 14 00:08:04.398293 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 14 00:08:04.400458 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 14 00:08:04.400848 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 14 00:08:04.401072 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 14 00:08:04.415202 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 00:08:04.415272 kernel: GPT:17805311 != 80003071
May 14 00:08:04.415286 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 00:08:04.415299 kernel: GPT:17805311 != 80003071
May 14 00:08:04.415311 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 00:08:04.415345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 14 00:08:04.421332 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 14 00:08:04.425492 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 00:08:04.435413 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 14 00:08:04.435622 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 14 00:08:04.435633 kernel: usbcore: registered new interface driver usbhid
May 14 00:08:04.438798 kernel: usbhid: USB HID core driver
May 14 00:08:04.454413 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
May 14 00:08:04.458367 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
May 14 00:08:04.460682 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
May 14 00:08:04.477442 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (455)
May 14 00:08:04.477495 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (467)
May 14 00:08:04.488153 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 14 00:08:04.506916 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 14 00:08:04.521760 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 14 00:08:04.523019 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 14 00:08:04.532836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 14 00:08:04.535244 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 00:08:04.552918 disk-uuid[578]: Primary Header is updated. May 14 00:08:04.552918 disk-uuid[578]: Secondary Entries is updated. May 14 00:08:04.552918 disk-uuid[578]: Secondary Header is updated. May 14 00:08:04.562358 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 00:08:05.573550 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 00:08:05.574563 disk-uuid[579]: The operation has completed successfully. May 14 00:08:05.652589 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 00:08:05.652686 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 00:08:05.678547 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 00:08:05.698175 sh[595]: Success May 14 00:08:05.720479 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 14 00:08:05.795748 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 00:08:05.801456 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 00:08:05.821193 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 00:08:05.840652 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 May 14 00:08:05.840739 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 00:08:05.844256 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 00:08:05.847934 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 00:08:05.850791 kernel: BTRFS info (device dm-0): using free space tree May 14 00:08:05.865907 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 14 00:08:05.869807 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 00:08:05.871611 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 00:08:05.874742 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 00:08:05.880524 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 00:08:05.920419 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 00:08:05.920486 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 00:08:05.923528 kernel: BTRFS info (device sda6): using free space tree May 14 00:08:05.935008 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 00:08:05.935071 kernel: BTRFS info (device sda6): auto enabling async discard May 14 00:08:05.943448 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 00:08:05.946178 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 00:08:05.950213 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 00:08:06.011820 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 00:08:06.015572 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
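verity-setup.service builds /dev/mapper/usr so that every 4096-byte block read from the read-only /usr partition is verified against a sha256 hash tree; the tree's root must equal the verity.usrhash= value pinned on the kernel command line shown at the top of the log (the "sha256-ni" line means the SHA-NI accelerated implementation is doing the hashing). A simplified sketch of the tree construction, leaving out the per-block salt and the exact on-disk packing real dm-verity uses, and assuming a non-empty input file:

```python
#!/usr/bin/env python3
"""Compute a dm-verity-style sha256 tree root over a file (simplified)."""
import hashlib
import sys

BLOCK = 4096

def chunk(data: bytes) -> list[bytes]:
    """Split into BLOCK-sized pieces, zero-padding the last one."""
    return [data[i:i + BLOCK].ljust(BLOCK, b"\0") for i in range(0, len(data), BLOCK)]

with open(sys.argv[1], "rb") as f:
    hashes = [hashlib.sha256(c).digest() for c in chunk(f.read())]

# Hash each level's concatenated digests again until one root remains.
while len(hashes) > 1:
    hashes = [hashlib.sha256(c).digest() for c in chunk(b"".join(hashes))]

print("root:", hashes[0].hex())
```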
May 14 00:08:06.048882 systemd-networkd[773]: lo: Link UP May 14 00:08:06.049538 systemd-networkd[773]: lo: Gained carrier May 14 00:08:06.051622 ignition[710]: Ignition 2.20.0 May 14 00:08:06.051641 ignition[710]: Stage: fetch-offline May 14 00:08:06.052539 systemd-networkd[773]: Enumeration completed May 14 00:08:06.051675 ignition[710]: no configs at "/usr/lib/ignition/base.d" May 14 00:08:06.052809 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 00:08:06.051683 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:08:06.053567 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:08:06.051762 ignition[710]: parsed url from cmdline: "" May 14 00:08:06.053571 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:08:06.051765 ignition[710]: no config URL provided May 14 00:08:06.054794 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 00:08:06.051769 ignition[710]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:08:06.054872 systemd-networkd[773]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:08:06.051775 ignition[710]: no config at "/usr/lib/ignition/user.ign" May 14 00:08:06.054875 systemd-networkd[773]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:08:06.051779 ignition[710]: failed to fetch config: resource requires networking May 14 00:08:06.060913 systemd-networkd[773]: eth0: Link UP May 14 00:08:06.051939 ignition[710]: Ignition finished successfully May 14 00:08:06.060917 systemd-networkd[773]: eth0: Gained carrier May 14 00:08:06.060924 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:08:06.064862 systemd[1]: Reached target network.target - Network. May 14 00:08:06.065033 systemd-networkd[773]: eth1: Link UP May 14 00:08:06.065035 systemd-networkd[773]: eth1: Gained carrier May 14 00:08:06.065042 systemd-networkd[773]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 00:08:06.069427 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
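fetch-offline failing with "resource requires networking" is by design: no config is baked into the image, so Ignition hands off to the fetch stage, which can only succeed once systemd-networkd has brought a link up. The gating condition is easy to reproduce from userspace; a minimal sketch that polls an interface's carrier flag (the interface name and timeout are assumptions, not taken from the log):

```python
#!/usr/bin/env python3
"""Wait until an interface reports carrier before fetching remote config."""
import time

def wait_for_carrier(ifname: str = "eth0", timeout: float = 30.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with open(f"/sys/class/net/{ifname}/carrier") as f:
                if f.read().strip() == "1":     # 1 = link has carrier
                    return True
        except OSError:
            pass            # interface still down: carrier is unreadable
        time.sleep(0.5)
    return False

if wait_for_carrier("eth0"):
    print("carrier up; safe to fetch remote config")
```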
May 14 00:08:06.094475 ignition[783]: Ignition 2.20.0 May 14 00:08:06.094484 ignition[783]: Stage: fetch May 14 00:08:06.094622 ignition[783]: no configs at "/usr/lib/ignition/base.d" May 14 00:08:06.094631 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:08:06.094709 ignition[783]: parsed url from cmdline: "" May 14 00:08:06.094712 ignition[783]: no config URL provided May 14 00:08:06.094716 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:08:06.094723 ignition[783]: no config at "/usr/lib/ignition/user.ign" May 14 00:08:06.094741 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 14 00:08:06.094901 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 14 00:08:06.105382 systemd-networkd[773]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:08:06.136384 systemd-networkd[773]: eth0: DHCPv4 address 37.27.39.104/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 14 00:08:06.295523 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 14 00:08:06.299929 ignition[783]: GET result: OK May 14 00:08:06.300063 ignition[783]: parsing config with SHA512: 052c5a4a23bdebb73c353570de976e814209c8b9bf05bbe9054e47e2d4140686d01af65382152a4afce6a922d41db594ec67cf552a30f19560774c23e331b6ec May 14 00:08:06.310510 unknown[783]: fetched base config from "system" May 14 00:08:06.310537 unknown[783]: fetched base config from "system" May 14 00:08:06.311513 ignition[783]: fetch: fetch complete May 14 00:08:06.310552 unknown[783]: fetched user config from "hetzner" May 14 00:08:06.311527 ignition[783]: fetch: fetch passed May 14 00:08:06.311605 ignition[783]: Ignition finished successfully May 14 00:08:06.314992 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 14 00:08:06.318027 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 00:08:06.347658 ignition[791]: Ignition 2.20.0 May 14 00:08:06.347676 ignition[791]: Stage: kargs May 14 00:08:06.347915 ignition[791]: no configs at "/usr/lib/ignition/base.d" May 14 00:08:06.350655 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 00:08:06.347933 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:08:06.349141 ignition[791]: kargs: kargs passed May 14 00:08:06.349202 ignition[791]: Ignition finished successfully May 14 00:08:06.355497 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 00:08:06.381635 ignition[797]: Ignition 2.20.0 May 14 00:08:06.382127 ignition[797]: Stage: disks May 14 00:08:06.382292 ignition[797]: no configs at "/usr/lib/ignition/base.d" May 14 00:08:06.384570 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 00:08:06.382301 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:08:06.394739 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 00:08:06.383086 ignition[797]: disks: disks passed May 14 00:08:06.395938 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 00:08:06.383121 ignition[797]: Ignition finished successfully May 14 00:08:06.398002 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 00:08:06.401245 systemd[1]: Reached target sysinit.target - System Initialization. 
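Attempt #1 above hits the link-local metadata address before DHCP has finished and fails with "network is unreachable"; attempt #2, after both interfaces receive their DHCPv4 leases, succeeds, and Ignition logs a SHA512 of the config it fetched. A sketch of that retry-then-hash flow, using the endpoint from the log (the attempt count and backoff are assumptions):

```python
#!/usr/bin/env python3
"""Fetch Hetzner userdata with retries and report its SHA512."""
import hashlib
import time
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(attempts: int = 5, backoff: float = 2.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.read()
        except OSError as exc:          # e.g. "network is unreachable"
            print(f"GET attempt #{attempt} failed: {exc}")
            time.sleep(backoff)
    raise RuntimeError("metadata endpoint unreachable")

data = fetch_userdata()
print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
```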
May 14 00:08:06.404551 systemd[1]: Reached target basic.target - Basic System. May 14 00:08:06.410524 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 00:08:06.448492 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 14 00:08:06.452725 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 00:08:06.456213 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 00:08:06.597358 kernel: EXT4-fs (sda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none. May 14 00:08:06.597864 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 00:08:06.598857 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 00:08:06.602531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 00:08:06.613371 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 00:08:06.616864 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 14 00:08:06.618497 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 00:08:06.619507 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 00:08:06.628674 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 00:08:06.646447 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (813) May 14 00:08:06.646477 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 00:08:06.646487 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 00:08:06.646497 kernel: BTRFS info (device sda6): using free space tree May 14 00:08:06.650022 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 00:08:06.650056 kernel: BTRFS info (device sda6): auto enabling async discard May 14 00:08:06.651043 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 14 00:08:06.666409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 00:08:06.714586 coreos-metadata[815]: May 14 00:08:06.714 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 14 00:08:06.719077 coreos-metadata[815]: May 14 00:08:06.716 INFO Fetch successful May 14 00:08:06.719077 coreos-metadata[815]: May 14 00:08:06.717 INFO wrote hostname ci-4284-0-0-n-186718797f to /sysroot/etc/hostname May 14 00:08:06.720375 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory May 14 00:08:06.723062 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 00:08:06.727653 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory May 14 00:08:06.733196 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory May 14 00:08:06.739365 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory May 14 00:08:06.847682 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 00:08:06.850967 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 00:08:06.866461 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 00:08:06.871707 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 00:08:06.871416 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
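flatcar-metadata-hostname.service, finished a few lines further on, does one small job: fetch the hostname from the same metadata service and drop it under the still-/sysroot-prefixed root filesystem so the real boot picks it up. A sketch of the equivalent steps (endpoint and target path are the ones in the log; run as root inside the initrd):

```python
#!/usr/bin/env python3
"""Fetch the hostname from the metadata service and install it."""
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

with urllib.request.urlopen(URL, timeout=10) as resp:
    hostname = resp.read().decode().strip()

# /sysroot is where the future root filesystem is mounted in the initrd.
with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
print("wrote hostname", hostname)
```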
May 14 00:08:06.896006 ignition[929]: INFO : Ignition 2.20.0 May 14 00:08:06.896871 ignition[929]: INFO : Stage: mount May 14 00:08:06.897621 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:08:06.898483 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:08:06.901437 ignition[929]: INFO : mount: mount passed May 14 00:08:06.901437 ignition[929]: INFO : Ignition finished successfully May 14 00:08:06.902785 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 00:08:06.905446 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 00:08:06.917551 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 00:08:06.924239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 00:08:06.948379 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (941) May 14 00:08:06.948479 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 00:08:06.952250 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 00:08:06.952348 kernel: BTRFS info (device sda6): using free space tree May 14 00:08:06.969095 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 00:08:06.969186 kernel: BTRFS info (device sda6): auto enabling async discard May 14 00:08:06.973171 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 00:08:07.001909 ignition[957]: INFO : Ignition 2.20.0 May 14 00:08:07.001909 ignition[957]: INFO : Stage: files May 14 00:08:07.003570 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:08:07.003570 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:08:07.003570 ignition[957]: DEBUG : files: compiled without relabeling support, skipping May 14 00:08:07.005756 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 00:08:07.005756 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 00:08:07.007333 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 00:08:07.007333 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 00:08:07.008916 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 00:08:07.008916 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 14 00:08:07.008916 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 14 00:08:07.007349 unknown[957]: wrote ssh authorized keys file for user: core May 14 00:08:07.288349 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 00:08:07.801563 systemd-networkd[773]: eth1: Gained IPv6LL May 14 00:08:07.865514 systemd-networkd[773]: eth0: Gained IPv6LL May 14 00:08:09.105077 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 00:08:09.108094 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 14 00:08:09.767272 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 00:08:10.086057 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 00:08:10.086057 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 00:08:10.092241 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:08:10.092241 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 00:08:10.092241 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 00:08:10.092241 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 14 00:08:10.092241 ignition[957]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 14 00:08:10.092241 ignition[957]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 14 00:08:10.092241 ignition[957]: INFO : files: op(d): [finished] 
processing unit "coreos-metadata.service" May 14 00:08:10.092241 ignition[957]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 14 00:08:10.092241 ignition[957]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 14 00:08:10.092241 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:08:10.092241 ignition[957]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:08:10.092241 ignition[957]: INFO : files: files passed May 14 00:08:10.092241 ignition[957]: INFO : Ignition finished successfully May 14 00:08:10.091677 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 00:08:10.099452 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 00:08:10.110584 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 00:08:10.114555 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:08:10.118460 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 14 00:08:10.129834 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:08:10.129834 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 00:08:10.132264 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:08:10.133539 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 00:08:10.135110 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 00:08:10.138490 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 00:08:10.196008 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:08:10.196155 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 00:08:10.197939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 00:08:10.200126 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 00:08:10.201056 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 00:08:10.204495 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 00:08:10.233606 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 00:08:10.236489 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 00:08:10.260191 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 00:08:10.261518 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:08:10.262734 systemd[1]: Stopped target timers.target - Timer Units. May 14 00:08:10.263233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:08:10.263369 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 00:08:10.264023 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 00:08:10.264654 systemd[1]: Stopped target basic.target - Basic System. May 14 00:08:10.265588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
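The files stage above writes whole units (op(c)), drop-ins (op(e)), and presets (op(f)). Drop-ins live in a <unit>.service.d/ directory and override individual directives without replacing the unit; preset files tell systemd which units to enable on first boot. A sketch of ops (e) and (f) using the paths from the log; the drop-in body is a placeholder, since the log records filenames but not contents, and the 20-ignition.preset filename is an assumption about Ignition's convention:

```python
#!/usr/bin/env python3
"""Write a systemd drop-in and a first-boot preset under /sysroot."""
import os

root = "/sysroot"

# op(e): a drop-in fragment overrides single directives of the unit.
dropin_dir = os.path.join(root, "etc/systemd/system/coreos-metadata.service.d")
os.makedirs(dropin_dir, exist_ok=True)
with open(os.path.join(dropin_dir, "00-custom-metadata.conf"), "w") as f:
    f.write("[Service]\n# placeholder: actual directives are not shown in the log\n")

# op(f): a preset marks prepare-helm.service enabled on first boot.
preset_dir = os.path.join(root, "etc/systemd/system-preset")
os.makedirs(preset_dir, exist_ok=True)
with open(os.path.join(preset_dir, "20-ignition.preset"), "w") as f:
    f.write("enable prepare-helm.service\n")
```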
May 14 00:08:10.266565 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 00:08:10.267691 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 00:08:10.269489 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 00:08:10.270420 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 00:08:10.271552 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 00:08:10.272639 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 00:08:10.273711 systemd[1]: Stopped target swap.target - Swaps. May 14 00:08:10.274691 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:08:10.274812 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 00:08:10.276287 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 00:08:10.278170 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:08:10.280037 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 00:08:10.280190 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:08:10.281453 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:08:10.281651 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 00:08:10.283159 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 00:08:10.283337 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 00:08:10.284800 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:08:10.284938 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 00:08:10.285987 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 14 00:08:10.286179 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 00:08:10.290570 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 00:08:10.293441 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 00:08:10.294507 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 00:08:10.294693 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 00:08:10.296664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:08:10.296802 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 00:08:10.304002 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 00:08:10.304089 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 00:08:10.317414 ignition[1012]: INFO : Ignition 2.20.0 May 14 00:08:10.317414 ignition[1012]: INFO : Stage: umount May 14 00:08:10.321009 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:08:10.321009 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 00:08:10.321009 ignition[1012]: INFO : umount: umount passed May 14 00:08:10.321009 ignition[1012]: INFO : Ignition finished successfully May 14 00:08:10.319969 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:08:10.320510 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:08:10.320582 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
May 14 00:08:10.323489 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:08:10.323547 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 00:08:10.330502 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:08:10.330562 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 00:08:10.331611 systemd[1]: ignition-fetch.service: Deactivated successfully. May 14 00:08:10.331662 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 14 00:08:10.332757 systemd[1]: Stopped target network.target - Network. May 14 00:08:10.333827 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:08:10.333888 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 00:08:10.335083 systemd[1]: Stopped target paths.target - Path Units. May 14 00:08:10.336167 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:08:10.341391 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:08:10.342834 systemd[1]: Stopped target slices.target - Slice Units. May 14 00:08:10.343521 systemd[1]: Stopped target sockets.target - Socket Units. May 14 00:08:10.345776 systemd[1]: iscsid.socket: Deactivated successfully. May 14 00:08:10.345823 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 00:08:10.346893 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:08:10.346934 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 00:08:10.348008 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:08:10.348062 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 00:08:10.349186 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 00:08:10.349240 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 00:08:10.350500 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 00:08:10.351731 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 00:08:10.353448 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:08:10.353571 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 00:08:10.355534 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 00:08:10.355626 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 00:08:10.358625 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:08:10.358755 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 00:08:10.362778 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 00:08:10.363250 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 00:08:10.363332 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:08:10.366506 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 00:08:10.366719 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:08:10.366813 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 00:08:10.368755 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 00:08:10.369186 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
May 14 00:08:10.369237 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 00:08:10.373619 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 00:08:10.374742 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:08:10.375522 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 00:08:10.376196 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:08:10.376233 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 00:08:10.377841 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 00:08:10.377875 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 00:08:10.378955 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:08:10.381870 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:08:10.391482 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 00:08:10.392068 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 00:08:10.394838 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:08:10.394973 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 00:08:10.396132 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 00:08:10.396171 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 00:08:10.397132 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 00:08:10.397164 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:08:10.398169 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:08:10.398212 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 00:08:10.399718 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:08:10.399759 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 00:08:10.400818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 00:08:10.400862 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 00:08:10.404423 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 00:08:10.405090 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 00:08:10.405139 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 00:08:10.406929 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 14 00:08:10.406970 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 00:08:10.408651 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 00:08:10.408687 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:08:10.410620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:08:10.410655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 00:08:10.413293 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:08:10.413473 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 00:08:10.414522 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
May 14 00:08:10.416432 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 00:08:10.429480 systemd[1]: Switching root. May 14 00:08:10.500485 systemd-journald[188]: Journal stopped May 14 00:08:11.673200 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). May 14 00:08:11.673241 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:08:11.673252 kernel: SELinux: policy capability open_perms=1 May 14 00:08:11.673263 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:08:11.673275 kernel: SELinux: policy capability always_check_network=0 May 14 00:08:11.673283 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:08:11.673292 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:08:11.673301 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:08:11.673311 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:08:11.673332 kernel: audit: type=1403 audit(1747181290.702:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 00:08:11.673353 systemd[1]: Successfully loaded SELinux policy in 60.225ms. May 14 00:08:11.673381 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.210ms. May 14 00:08:11.673393 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 00:08:11.673404 systemd[1]: Detected virtualization kvm. May 14 00:08:11.673415 systemd[1]: Detected architecture x86-64. May 14 00:08:11.673424 systemd[1]: Detected first boot. May 14 00:08:11.673434 systemd[1]: Hostname set to <ci-4284-0-0-n-186718797f>. May 14 00:08:11.673443 systemd[1]: Initializing machine ID from VM UUID. May 14 00:08:11.673453 zram_generator::config[1057]: No configuration found. May 14 00:08:11.673465 kernel: Guest personality initialized and is inactive May 14 00:08:11.673475 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 00:08:11.673485 kernel: Initialized host personality May 14 00:08:11.673495 kernel: NET: Registered PF_VSOCK protocol family May 14 00:08:11.673504 systemd[1]: Populated /etc with preset unit settings. May 14 00:08:11.673516 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 00:08:11.673525 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 00:08:11.673535 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 00:08:11.673545 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 00:08:11.673555 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 00:08:11.673566 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 00:08:11.673577 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 00:08:11.673587 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 00:08:11.673597 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 00:08:11.673607 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 00:08:11.673619 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
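"Initializing machine ID from VM UUID" a few lines back refers to the DMI product UUID that KVM exposes at /sys/class/dmi/id/product_uuid; on a first boot with no /etc/machine-id, systemd can seed the machine ID from it. A simplified sketch of that normalization (systemd's real logic has additional validation and fallbacks):

```python
#!/usr/bin/env python3
"""Derive a machine-id-shaped value from the DMI product UUID (root only)."""
with open("/sys/class/dmi/id/product_uuid") as f:
    uuid = f.read().strip()

# A machine ID is the UUID without dashes, lowercased: 32 hex digits.
machine_id = uuid.replace("-", "").lower()
assert len(machine_id) == 32 and all(c in "0123456789abcdef" for c in machine_id)
print(machine_id)      # what systemd would record in /etc/machine-id
```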
May 14 00:08:11.673631 systemd[1]: Created slice user.slice - User and Session Slice. May 14 00:08:11.673641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 00:08:11.673650 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 00:08:11.673662 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 00:08:11.673672 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 00:08:11.673682 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 00:08:11.673692 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 00:08:11.673702 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 00:08:11.673713 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 00:08:11.673724 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 00:08:11.673737 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 14 00:08:11.673747 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 00:08:11.673757 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 00:08:11.673767 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 00:08:11.673777 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 00:08:11.673786 systemd[1]: Reached target slices.target - Slice Units. May 14 00:08:11.673797 systemd[1]: Reached target swap.target - Swaps. May 14 00:08:11.673807 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 00:08:11.673817 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 00:08:11.673828 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 00:08:11.673838 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 00:08:11.673849 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 00:08:11.673863 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 00:08:11.673873 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 00:08:11.673884 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 00:08:11.673894 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 00:08:11.673904 systemd[1]: Mounting media.mount - External Media Directory... May 14 00:08:11.673914 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:08:11.673924 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 00:08:11.673934 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 00:08:11.673944 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 00:08:11.673955 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:08:11.673966 systemd[1]: Reached target machines.target - Containers. 
May 14 00:08:11.673977 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 00:08:11.673987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:08:11.673997 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 00:08:11.674007 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 00:08:11.674017 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:08:11.674027 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 00:08:11.674037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:08:11.674046 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 00:08:11.674058 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:08:11.674068 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:08:11.674077 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 00:08:11.674088 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 00:08:11.674097 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 00:08:11.674107 systemd[1]: Stopped systemd-fsck-usr.service. May 14 00:08:11.674117 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:08:11.674127 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 00:08:11.674137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 00:08:11.674148 kernel: loop: module loaded May 14 00:08:11.674158 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 00:08:11.674169 kernel: fuse: init (API version 7.39) May 14 00:08:11.674178 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 00:08:11.674188 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 00:08:11.674199 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 00:08:11.674209 systemd[1]: verity-setup.service: Deactivated successfully. May 14 00:08:11.674218 systemd[1]: Stopped verity-setup.service. May 14 00:08:11.674228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:08:11.674240 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 00:08:11.674250 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 00:08:11.674260 systemd[1]: Mounted media.mount - External Media Directory. May 14 00:08:11.674271 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 00:08:11.674281 kernel: ACPI: bus type drm_connector registered May 14 00:08:11.674290 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 00:08:11.674312 systemd-journald[1145]: Collecting audit messages is disabled. May 14 00:08:11.674388 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
May 14 00:08:11.676357 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 00:08:11.676378 systemd-journald[1145]: Journal started May 14 00:08:11.676402 systemd-journald[1145]: Runtime Journal (/run/log/journal/194a356984ab435cb6a8370b34be7b4b) is 4.7M, max 38.3M, 33.5M free. May 14 00:08:11.352199 systemd[1]: Queued start job for default target multi-user.target. May 14 00:08:11.364462 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 14 00:08:11.365105 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 00:08:11.680543 systemd[1]: Started systemd-journald.service - Journal Service. May 14 00:08:11.679079 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 00:08:11.679925 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 00:08:11.680049 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 00:08:11.680733 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:08:11.680908 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 00:08:11.681603 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:08:11.681772 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 00:08:11.682524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:08:11.682699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 00:08:11.683400 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 00:08:11.683567 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 00:08:11.684229 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:08:11.684911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 00:08:11.685570 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 00:08:11.686306 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 00:08:11.687194 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 00:08:11.693633 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 00:08:11.697408 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 00:08:11.700795 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 00:08:11.701379 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:08:11.701460 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 00:08:11.702916 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 00:08:11.708255 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 00:08:11.710482 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 00:08:11.711009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:08:11.714813 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 00:08:11.716471 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
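The journald size report above (4.7M used, 38.3M max, 33.5M free) follows the journald.conf(5) defaults: RuntimeMaxUse= defaults to 10% of the filesystem backing /run, capped at 4G, and "free" is that cap minus current usage. A sketch of the same arithmetic:

```python
#!/usr/bin/env python3
"""Estimate journald's default runtime journal cap from the size of /run."""
import os

st = os.statvfs("/run")
fs_bytes = st.f_frsize * st.f_blocks
cap = min(fs_bytes // 10, 4 << 30)     # 10% of /run, capped at 4 GiB
print(f"/run is {fs_bytes / 2**20:.1f}M; default RuntimeMaxUse ~ {cap / 2**20:.1f}M")
```

By that rule, the 38.3M cap in the log would correspond to a /run of roughly 383M on this machine.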
May 14 00:08:11.717521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:08:11.721064 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 00:08:11.723590 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 00:08:11.724970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 00:08:11.726660 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 00:08:11.731466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 00:08:11.735923 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 00:08:11.737645 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 00:08:11.738141 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 00:08:11.741645 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 00:08:11.754487 systemd-journald[1145]: Time spent on flushing to /var/log/journal/194a356984ab435cb6a8370b34be7b4b is 55.237ms for 1143 entries. May 14 00:08:11.754487 systemd-journald[1145]: System Journal (/var/log/journal/194a356984ab435cb6a8370b34be7b4b) is 8M, max 584.8M, 576.8M free. May 14 00:08:11.836443 systemd-journald[1145]: Received client request to flush runtime journal. May 14 00:08:11.836505 kernel: loop0: detected capacity change from 0 to 109808 May 14 00:08:11.836523 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:08:11.758684 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 00:08:11.761877 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 00:08:11.764483 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 00:08:11.808112 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 00:08:11.815392 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 00:08:11.840680 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 00:08:11.843306 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. May 14 00:08:11.843332 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. May 14 00:08:11.846835 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 00:08:11.849900 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 00:08:11.856012 udevadm[1193]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 14 00:08:11.863681 kernel: loop1: detected capacity change from 0 to 8 May 14 00:08:11.861455 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 00:08:11.862406 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 00:08:11.889286 kernel: loop2: detected capacity change from 0 to 218376 May 14 00:08:11.925022 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
May 14 00:08:11.931466 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 00:08:11.934347 kernel: loop3: detected capacity change from 0 to 151640 May 14 00:08:11.954554 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. May 14 00:08:11.954872 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. May 14 00:08:11.970821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 00:08:11.996351 kernel: loop4: detected capacity change from 0 to 109808 May 14 00:08:12.012352 kernel: loop5: detected capacity change from 0 to 8 May 14 00:08:12.016339 kernel: loop6: detected capacity change from 0 to 218376 May 14 00:08:12.056354 kernel: loop7: detected capacity change from 0 to 151640 May 14 00:08:12.080614 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 14 00:08:12.081782 (sd-merge)[1212]: Merged extensions into '/usr'. May 14 00:08:12.090483 systemd[1]: Reload requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)... May 14 00:08:12.090594 systemd[1]: Reloading... May 14 00:08:12.208364 zram_generator::config[1240]: No configuration found. May 14 00:08:12.305462 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:08:12.330914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:08:12.402872 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:08:12.403009 systemd[1]: Reloading finished in 312 ms. May 14 00:08:12.418785 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 00:08:12.419754 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 00:08:12.431464 systemd[1]: Starting ensure-sysext.service... May 14 00:08:12.436003 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 00:08:12.462975 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... May 14 00:08:12.462990 systemd[1]: Reloading... May 14 00:08:12.463024 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:08:12.463253 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 00:08:12.463981 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:08:12.464204 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. May 14 00:08:12.464258 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. May 14 00:08:12.470117 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. May 14 00:08:12.470129 systemd-tmpfiles[1284]: Skipping /boot May 14 00:08:12.482941 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. May 14 00:08:12.482955 systemd-tmpfiles[1284]: Skipping /boot May 14 00:08:12.544095 zram_generator::config[1319]: No configuration found. 
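The loop0 through loop7 "capacity change" lines above are the extension images being attached to loop devices, after which (sd-merge) reports the four extensions folded into /usr. Underneath, systemd-sysext stacks each image's /usr tree as a read-only overlayfs layer on top of the base /usr. A conceptual sketch that only prints the equivalent mount invocation; the mountpoint names under /run are illustrative, not systemd's exact paths:

```python
#!/usr/bin/env python3
"""Print the overlayfs mount that a sysext merge conceptually performs."""
import glob

# Discovery directory from the log; systemd-sysext also checks /run and /var/lib.
images = sorted(glob.glob("/etc/extensions/*.raw"))
layers = [f"/run/sysext/{i}/usr" for i in range(len(images))]  # illustrative mounts

# In overlayfs the first lowerdir entry is the topmost layer; base /usr is last.
lowerdir = ":".join(layers + ["/usr"])
print(f"mount -t overlay overlay -o ro,lowerdir={lowerdir} /usr")
```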
May 14 00:08:12.647180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:08:12.716657 systemd[1]: Reloading finished in 253 ms. May 14 00:08:12.729223 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 00:08:12.730123 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 00:08:12.759668 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 00:08:12.776693 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 00:08:12.782472 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 00:08:12.793878 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 00:08:12.798819 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 00:08:12.803733 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 00:08:12.811278 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:08:12.812375 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:08:12.817457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 00:08:12.826976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 00:08:12.832014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 00:08:12.832898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:08:12.833477 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:08:12.833647 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:08:12.839021 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 00:08:12.847894 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 00:08:12.849167 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 00:08:12.849376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 00:08:12.849466 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 00:08:12.851372 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 00:08:12.856122 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 00:08:12.857079 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 14 00:08:12.858836 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:08:12.858979 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:08:12.870373 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 00:08:12.873791 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 00:08:12.873995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:08:12.881381 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:08:12.886078 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 00:08:12.887202 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:08:12.887307 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:08:12.887767 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 00:08:12.889170 systemd-udevd[1368]: Using default interface naming scheme 'v255'.
May 14 00:08:12.890573 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 00:08:12.892221 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:08:12.892978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:08:12.894855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:08:12.894986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:08:12.896083 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 00:08:12.897156 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 00:08:12.897283 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 00:08:12.898908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:08:12.899360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:08:12.904495 systemd[1]: Finished ensure-sysext.service.
May 14 00:08:12.908952 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 00:08:12.909004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 00:08:12.911464 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 00:08:12.911940 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 00:08:12.919085 augenrules[1401]: No rules
May 14 00:08:12.919993 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 00:08:12.920412 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 00:08:12.930402 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
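
The modprobe@*.service instances that start and immediately deactivate above are systemd's template pattern for one-shot module loading: the instance name carries the module to load. A rough sketch of the equivalence (illustrative only; the exact modprobe flags the unit passes are not shown in the log):

# Loading dm_mod through the template unit...
systemctl start modprobe@dm_mod.service
# ...does essentially the same work as calling modprobe directly
modprobe dm_mod
# Either way, the module is visible once loaded
lsmod | grep -w dm_mod
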
May 14 00:08:12.935064 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 00:08:12.950563 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 00:08:12.985077 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 14 00:08:13.086376 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1416)
May 14 00:08:13.089993 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 00:08:13.090593 systemd[1]: Reached target time-set.target - System Time Set.
May 14 00:08:13.103424 systemd-networkd[1411]: lo: Link UP
May 14 00:08:13.103432 systemd-networkd[1411]: lo: Gained carrier
May 14 00:08:13.108807 systemd-resolved[1367]: Positive Trust Anchors:
May 14 00:08:13.109506 systemd-networkd[1411]: Enumeration completed
May 14 00:08:13.109581 systemd-resolved[1367]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 00:08:13.110347 systemd-resolved[1367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 00:08:13.110718 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 00:08:13.112855 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 00:08:13.114418 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:08:13.114800 systemd-networkd[1411]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 00:08:13.116642 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 00:08:13.116657 systemd-networkd[1411]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:08:13.116660 systemd-networkd[1411]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 00:08:13.116989 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:08:13.117010 systemd-networkd[1411]: eth0: Link UP
May 14 00:08:13.117013 systemd-networkd[1411]: eth0: Gained carrier
May 14 00:08:13.117021 systemd-networkd[1411]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:08:13.119747 systemd-resolved[1367]: Using system hostname 'ci-4284-0-0-n-186718797f'.
May 14 00:08:13.122257 systemd-networkd[1411]: eth1: Link UP
May 14 00:08:13.122261 systemd-networkd[1411]: eth1: Gained carrier
May 14 00:08:13.122274 systemd-networkd[1411]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 00:08:13.122488 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
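
Both NICs were matched by the catch-all /usr/lib/systemd/network/zz-default.network, which is why networkd warns about the "potentially unpredictable interface name". A minimal sketch of a more specific unit that would win the match for eth0 (the file name and contents are illustrative, not taken from this machine):

# Files are matched in lexical order, so a 00- prefix beats zz-default
mkdir -p /etc/systemd/network
cat >/etc/systemd/network/00-eth0.network <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=ipv4
EOF
# Ask networkd to re-read its configuration
networkctl reload
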
May 14 00:08:13.122986 systemd[1]: Reached target network.target - Network.
May 14 00:08:13.123401 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 00:08:13.147765 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 00:08:13.158460 kernel: mousedev: PS/2 mouse device common for all mice
May 14 00:08:13.161056 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 14 00:08:13.160445 systemd-networkd[1411]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 00:08:13.161202 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection.
May 14 00:08:13.167774 kernel: ACPI: button: Power Button [PWRF]
May 14 00:08:13.188113 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
May 14 00:08:13.188164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 00:08:13.188249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 00:08:13.189218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 00:08:13.190937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 00:08:13.192671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 00:08:13.193169 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 00:08:13.193201 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 00:08:13.193221 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 00:08:13.193231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 14 00:08:13.211007 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 00:08:13.211162 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 00:08:13.213894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 00:08:13.214033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 00:08:13.214998 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 00:08:13.216417 systemd-networkd[1411]: eth0: DHCPv4 address 37.27.39.104/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 14 00:08:13.217510 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection.
May 14 00:08:13.217522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 00:08:13.219262 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 00:08:13.220057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
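
Both DHCPv4 leases above hand out a /32 host address with a gateway that lies outside any connected subnet (the usual Hetzner Cloud layout), so networkd has to install an on-link route to make the gateway reachable. A sketch of the equivalent manual setup, using the eth0 lease values from the log purely for illustration:

# Assign the host address exactly as the lease did
ip addr add 37.27.39.104/32 dev eth0
# Make the gateway reachable even though no connected subnet covers it
ip route add 172.31.1.1 dev eth0 scope link
# Default route via that gateway, marked on-link
ip route add default via 172.31.1.1 dev eth0 onlink
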
May 14 00:08:13.227560 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 14 00:08:13.228128 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 14 00:08:13.229124 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 14 00:08:13.237394 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 14 00:08:13.244810 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
May 14 00:08:13.244838 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
May 14 00:08:13.247348 kernel: EDAC MC: Ver: 3.0.0
May 14 00:08:13.250374 kernel: Console: switching to colour dummy device 80x25
May 14 00:08:13.251647 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 14 00:08:13.251667 kernel: [drm] features: -context_init
May 14 00:08:13.258383 kernel: [drm] number of scanouts: 1
May 14 00:08:13.261690 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 14 00:08:13.264680 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 00:08:13.272352 kernel: [drm] number of cap sets: 0
May 14 00:08:13.276455 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
May 14 00:08:13.282666 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 14 00:08:13.282696 kernel: Console: switching to colour frame buffer device 160x50
May 14 00:08:13.283061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:08:13.300076 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 14 00:08:13.312715 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 00:08:13.321421 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:08:13.321617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:08:13.328748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:08:13.335040 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 00:08:13.335385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:08:13.336831 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 00:08:13.427418 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 00:08:13.441824 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 14 00:08:13.444559 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 14 00:08:13.470737 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 00:08:13.507670 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 14 00:08:13.508209 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 00:08:13.508623 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 00:08:13.509584 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 00:08:13.509768 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 00:08:13.510156 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 14 00:08:13.510535 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 14 00:08:13.510746 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 14 00:08:13.510884 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 14 00:08:13.510930 systemd[1]: Reached target paths.target - Path Units.
May 14 00:08:13.511026 systemd[1]: Reached target timers.target - Timer Units.
May 14 00:08:13.513284 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 14 00:08:13.516168 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 14 00:08:13.523131 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 14 00:08:13.524665 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 14 00:08:13.525549 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 14 00:08:13.537420 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 14 00:08:13.541837 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 14 00:08:13.548212 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 14 00:08:13.552734 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 14 00:08:13.553668 systemd[1]: Reached target sockets.target - Socket Units.
May 14 00:08:13.554269 systemd[1]: Reached target basic.target - Basic System.
May 14 00:08:13.558497 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 14 00:08:13.558556 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 14 00:08:13.563425 systemd[1]: Starting containerd.service - containerd container runtime...
May 14 00:08:13.570691 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 14 00:08:13.571609 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 14 00:08:13.584571 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 14 00:08:13.595693 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 14 00:08:13.604584 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 14 00:08:13.608264 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 14 00:08:13.615862 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 14 00:08:13.621998 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 14 00:08:13.627430 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
May 14 00:08:13.630143 jq[1484]: false
May 14 00:08:13.632439 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 14 00:08:13.643958 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
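
Note that docker.socket, sshd.socket and systemd-hostnamed.socket are listening before their daemons run: classic systemd socket activation, where the service is only started on the first connection. A quick sketch of observing that state on a live machine:

# Show every listening socket unit and the service it would activate
systemctl list-sockets
# For one pair, confirm the socket is active while the service is not
systemctl status docker.socket docker.service
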
May 14 00:08:13.650378 extend-filesystems[1485]: Found loop4
May 14 00:08:13.650378 extend-filesystems[1485]: Found loop5
May 14 00:08:13.650378 extend-filesystems[1485]: Found loop6
May 14 00:08:13.650378 extend-filesystems[1485]: Found loop7
May 14 00:08:13.650378 extend-filesystems[1485]: Found sda
May 14 00:08:13.650378 extend-filesystems[1485]: Found sda1
May 14 00:08:13.650378 extend-filesystems[1485]: Found sda2
May 14 00:08:13.650378 extend-filesystems[1485]: Found sda3
May 14 00:08:13.650378 extend-filesystems[1485]: Found usr
May 14 00:08:13.650378 extend-filesystems[1485]: Found sda4
May 14 00:08:13.650378 extend-filesystems[1485]: Found sda6
May 14 00:08:13.650378 extend-filesystems[1485]: Found sda7
May 14 00:08:13.650378 extend-filesystems[1485]: Found sda9
May 14 00:08:13.650378 extend-filesystems[1485]: Checking size of /dev/sda9
May 14 00:08:13.730746 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
May 14 00:08:13.730835 coreos-metadata[1482]: May 14 00:08:13.681 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
May 14 00:08:13.730835 coreos-metadata[1482]: May 14 00:08:13.682 INFO Fetch successful
May 14 00:08:13.730835 coreos-metadata[1482]: May 14 00:08:13.683 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
May 14 00:08:13.730835 coreos-metadata[1482]: May 14 00:08:13.683 INFO Fetch successful
May 14 00:08:13.657147 dbus-daemon[1483]: [system] SELinux support is enabled
May 14 00:08:13.658663 systemd[1]: Starting systemd-logind.service - User Login Management...
May 14 00:08:13.735065 extend-filesystems[1485]: Resized partition /dev/sda9
May 14 00:08:13.663499 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 14 00:08:13.744243 extend-filesystems[1499]: resize2fs 1.47.2 (1-Jan-2025)
May 14 00:08:13.664374 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 14 00:08:13.670952 systemd[1]: Starting update-engine.service - Update Engine...
May 14 00:08:13.699417 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 14 00:08:13.708818 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 14 00:08:13.726100 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 14 00:08:13.738504 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 14 00:08:13.738671 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 14 00:08:13.738895 systemd[1]: motdgen.service: Deactivated successfully.
May 14 00:08:13.739027 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 14 00:08:13.751760 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 14 00:08:13.751942 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 14 00:08:13.766949 jq[1509]: true
May 14 00:08:13.770393 update_engine[1500]: I20250514 00:08:13.769712 1500 main.cc:92] Flatcar Update Engine starting
May 14 00:08:13.775216 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
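
The extend-filesystems entries here enumerate the block devices and then grow the root filesystem on /dev/sda9 online; the kernel line shows ext4 being resized from 1617920 to 9393147 blocks while mounted. A sketch of the equivalent manual procedure (growpart comes from cloud-utils and is an assumption; the Flatcar service may handle the partition step differently):

# Grow partition 9 of /dev/sda into the free space on the disk
growpart /dev/sda 9
# ext4 can then be grown online, even while mounted on /
resize2fs /dev/sda9
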
May 14 00:08:13.775249 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 14 00:08:13.780596 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 14 00:08:13.780620 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 14 00:08:13.791449 update_engine[1500]: I20250514 00:08:13.791249 1500 update_check_scheduler.cc:74] Next update check in 8m35s
May 14 00:08:13.797224 systemd[1]: Started update-engine.service - Update Engine.
May 14 00:08:13.800871 jq[1523]: true
May 14 00:08:13.801030 tar[1515]: linux-amd64/LICENSE
May 14 00:08:13.801030 tar[1515]: linux-amd64/helm
May 14 00:08:13.805758 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 14 00:08:13.811728 (ntainerd)[1528]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 14 00:08:13.841711 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1409)
May 14 00:08:13.857045 systemd-logind[1496]: New seat seat0.
May 14 00:08:13.872413 systemd-logind[1496]: Watching system buttons on /dev/input/event2 (Power Button)
May 14 00:08:13.872442 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 14 00:08:13.872637 systemd[1]: Started systemd-logind.service - User Login Management.
May 14 00:08:13.915309 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 14 00:08:13.928815 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 14 00:08:13.976306 kernel: EXT4-fs (sda9): resized filesystem to 9393147
May 14 00:08:13.982029 locksmithd[1531]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 14 00:08:14.019428 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 14 00:08:14.019428 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 5
May 14 00:08:14.019428 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
May 14 00:08:14.019243 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 14 00:08:14.023114 extend-filesystems[1485]: Resized filesystem in /dev/sda9
May 14 00:08:14.023114 extend-filesystems[1485]: Found sr0
May 14 00:08:14.019462 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 14 00:08:14.034407 bash[1552]: Updated "/home/core/.ssh/authorized_keys"
May 14 00:08:14.036737 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 14 00:08:14.045164 systemd[1]: Starting sshkeys.service...
May 14 00:08:14.081148 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 14 00:08:14.084868 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
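
locksmithd comes up with strategy="reboot"; it is the coordinator that decides when the updates staged by update_engine may reboot the machine. On Flatcar the strategy is normally chosen in update.conf; a sketch (the path and accepted values are per Flatcar's documentation, not something this log shows):

# Switch locksmithd to taking an etcd lock before rebooting
cat >>/etc/flatcar/update.conf <<'EOF'
REBOOT_STRATEGY=etcd-lock
EOF
systemctl restart locksmithd.service
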
May 14 00:08:14.138642 coreos-metadata[1567]: May 14 00:08:14.138 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
May 14 00:08:14.139870 coreos-metadata[1567]: May 14 00:08:14.139 INFO Fetch successful
May 14 00:08:14.142783 unknown[1567]: wrote ssh authorized keys file for user: core
May 14 00:08:14.192424 update-ssh-keys[1571]: Updated "/home/core/.ssh/authorized_keys"
May 14 00:08:14.189397 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 14 00:08:14.194380 systemd[1]: Finished sshkeys.service.
May 14 00:08:14.207260 containerd[1528]: time="2025-05-14T00:08:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 14 00:08:14.209715 containerd[1528]: time="2025-05-14T00:08:14.208668467Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
May 14 00:08:14.229605 containerd[1528]: time="2025-05-14T00:08:14.229541617Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.085µs"
May 14 00:08:14.229605 containerd[1528]: time="2025-05-14T00:08:14.229594035Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 14 00:08:14.229605 containerd[1528]: time="2025-05-14T00:08:14.229616327Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 14 00:08:14.229829 containerd[1528]: time="2025-05-14T00:08:14.229804661Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 14 00:08:14.229881 containerd[1528]: time="2025-05-14T00:08:14.229831080Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 14 00:08:14.229881 containerd[1528]: time="2025-05-14T00:08:14.229861777Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 00:08:14.229949 containerd[1528]: time="2025-05-14T00:08:14.229924174Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 00:08:14.229949 containerd[1528]: time="2025-05-14T00:08:14.229946366Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 00:08:14.230227 containerd[1528]: time="2025-05-14T00:08:14.230201685Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 14 00:08:14.230227 containerd[1528]: time="2025-05-14T00:08:14.230223907Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 00:08:14.230273 containerd[1528]: time="2025-05-14T00:08:14.230243343Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 14 00:08:14.230273 containerd[1528]: time="2025-05-14T00:08:14.230252450Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 14 00:08:14.236393 containerd[1528]: time="2025-05-14T00:08:14.236343004Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 14 00:08:14.236645 containerd[1528]: time="2025-05-14T00:08:14.236617579Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 00:08:14.236675 containerd[1528]: time="2025-05-14T00:08:14.236655430Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 14 00:08:14.236675 containerd[1528]: time="2025-05-14T00:08:14.236666691Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 00:08:14.236710 containerd[1528]: time="2025-05-14T00:08:14.236696176Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 14 00:08:14.236927 containerd[1528]: time="2025-05-14T00:08:14.236907483Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 14 00:08:14.236980 containerd[1528]: time="2025-05-14T00:08:14.236961243Z" level=info msg="metadata content store policy set" policy=shared
May 14 00:08:14.244968 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.248937727Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.248997428Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249014119Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249028677Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249041771Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249054085Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249069223Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249082388Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249099149Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249110551Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249120349Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249132973Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249242017Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 14 00:08:14.249297 containerd[1528]: time="2025-05-14T00:08:14.249261684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249274438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249288995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249301389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249335282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249366701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249379676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249391808Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249404763Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249417346Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249485784Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249498158Z" level=info msg="Start snapshots syncer"
May 14 00:08:14.249630 containerd[1528]: time="2025-05-14T00:08:14.249522543Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 14 00:08:14.249866 containerd[1528]: time="2025-05-14T00:08:14.249745983Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 14 00:08:14.249866 containerd[1528]: time="2025-05-14T00:08:14.249786138Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 14 00:08:14.249988 containerd[1528]: time="2025-05-14T00:08:14.249845549Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 14 00:08:14.249988 containerd[1528]: time="2025-05-14T00:08:14.249917524Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 14 00:08:14.249988 containerd[1528]: time="2025-05-14T00:08:14.249938203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 14 00:08:14.249988 containerd[1528]: time="2025-05-14T00:08:14.249949404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 14 00:08:14.249988 containerd[1528]: time="2025-05-14T00:08:14.249965223Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 14 00:08:14.249988 containerd[1528]: time="2025-05-14T00:08:14.249975733Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 14 00:08:14.249988 containerd[1528]: time="2025-05-14T00:08:14.249985000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 14 00:08:14.250127 containerd[1528]: time="2025-05-14T00:08:14.249994679Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 14 00:08:14.250127 containerd[1528]: time="2025-05-14T00:08:14.250015628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 14 00:08:14.250127 containerd[1528]: time="2025-05-14T00:08:14.250029063Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 14 00:08:14.250127 containerd[1528]: time="2025-05-14T00:08:14.250037218Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252745348Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252803938Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252813496Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252824356Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252833283Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252842881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252856707Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252876864Z" level=info msg="runtime interface created"
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252881644Z" level=info msg="created NRI interface"
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252889749Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252908494Z" level=info msg="Connect containerd service"
May 14 00:08:14.253340 containerd[1528]: time="2025-05-14T00:08:14.252960862Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 14 00:08:14.255544 containerd[1528]: time="2025-05-14T00:08:14.255524831Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 14 00:08:14.265479 systemd-networkd[1411]: eth1: Gained IPv6LL
May 14 00:08:14.266547 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection.
May 14 00:08:14.270742 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 14 00:08:14.271930 systemd[1]: Reached target network-online.target - Network is Online.
May 14 00:08:14.280421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:08:14.283620 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 14 00:08:14.290043 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 14 00:08:14.300449 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 14 00:08:14.327563 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 14 00:08:14.330702 systemd[1]: issuegen.service: Deactivated successfully.
May 14 00:08:14.330863 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 14 00:08:14.340130 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 14 00:08:14.365453 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 14 00:08:14.372000 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 14 00:08:14.377680 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 14 00:08:14.378290 systemd[1]: Reached target getty.target - Login Prompts.
May 14 00:08:14.435347 containerd[1528]: time="2025-05-14T00:08:14.435073593Z" level=info msg="Start subscribing containerd event"
May 14 00:08:14.435347 containerd[1528]: time="2025-05-14T00:08:14.435145688Z" level=info msg="Start recovering state"
May 14 00:08:14.435347 containerd[1528]: time="2025-05-14T00:08:14.435240296Z" level=info msg="Start event monitor"
May 14 00:08:14.435347 containerd[1528]: time="2025-05-14T00:08:14.435253029Z" level=info msg="Start cni network conf syncer for default"
May 14 00:08:14.435347 containerd[1528]: time="2025-05-14T00:08:14.435259512Z" level=info msg="Start streaming server"
May 14 00:08:14.435347 containerd[1528]: time="2025-05-14T00:08:14.435271414Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 14 00:08:14.435347 containerd[1528]: time="2025-05-14T00:08:14.435277615Z" level=info msg="runtime interface starting up..."
May 14 00:08:14.435347 containerd[1528]: time="2025-05-14T00:08:14.435284608Z" level=info msg="starting plugins..."
May 14 00:08:14.435347 containerd[1528]: time="2025-05-14T00:08:14.435295048Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 14 00:08:14.437162 containerd[1528]: time="2025-05-14T00:08:14.436947077Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 14 00:08:14.437162 containerd[1528]: time="2025-05-14T00:08:14.436981762Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 14 00:08:14.437162 containerd[1528]: time="2025-05-14T00:08:14.437144938Z" level=info msg="containerd successfully booted in 0.230186s"
May 14 00:08:14.437463 systemd[1]: Started containerd.service - containerd container runtime.
May 14 00:08:14.457929 systemd-networkd[1411]: eth0: Gained IPv6LL
May 14 00:08:14.458563 systemd-timesyncd[1399]: Network configuration changed, trying to establish connection.
May 14 00:08:14.583522 tar[1515]: linux-amd64/README.md
May 14 00:08:14.600088 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 14 00:08:15.638582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:08:15.643289 systemd[1]: Reached target multi-user.target - Multi-User System.
May 14 00:08:15.646510 systemd[1]: Startup finished in 1.557s (kernel) + 7.932s (initrd) + 5.003s (userspace) = 14.493s.
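
containerd booted in about 0.23s, but during startup it warned that it migrated a version-2 configuration on the fly and suggested running `containerd config migrate`. A sketch of persisting that migration so the warning goes away (on Flatcar the shipped file sits under read-only /usr/share/containerd, so writing the result to /etc/containerd is an assumption about how an override would be wired up):

# Render the migrated config and install it where containerd looks first
mkdir -p /etc/containerd
containerd config migrate > /etc/containerd/config.toml
systemctl restart containerd.service
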
May 14 00:08:15.652005 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:08:16.574015 kubelet[1625]: E0514 00:08:16.573929 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:08:16.579097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:08:16.580001 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:08:16.580500 systemd[1]: kubelet.service: Consumed 1.481s CPU time, 252.4M memory peak.
May 14 00:08:26.733851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 14 00:08:26.737391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:08:26.896238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:08:26.907527 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:08:26.968389 kubelet[1644]: E0514 00:08:26.968216 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:08:26.973663 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:08:26.973902 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:08:26.974366 systemd[1]: kubelet.service: Consumed 198ms CPU time, 104.6M memory peak.
May 14 00:08:36.983891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 14 00:08:36.987479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:08:37.149152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:08:37.161526 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:08:37.216905 kubelet[1659]: E0514 00:08:37.216797 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:08:37.218762 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:08:37.219003 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:08:37.219674 systemd[1]: kubelet.service: Consumed 203ms CPU time, 106M memory peak.
May 14 00:08:45.187893 systemd-timesyncd[1399]: Contacted time server 185.228.139.165:123 (2.flatcar.pool.ntp.org).
May 14 00:08:45.187988 systemd-timesyncd[1399]: Initial clock synchronization to Wed 2025-05-14 00:08:45.187647 UTC.
May 14 00:08:45.188192 systemd-resolved[1367]: Clock change detected. Flushing caches.
May 14 00:08:47.701699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
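
The kubelet crash loop that begins here repeats for the rest of the log: /var/lib/kubelet/config.yaml does not exist because no kubeadm init or join has run on this node yet, and the unit's restart policy retries roughly every ten seconds. That file is normally generated by kubeadm; a minimal hand-written sketch, purely illustrative, would be:

# kubeadm writes this during init/join; hand-rolling it is only a sketch
mkdir -p /var/lib/kubelet
cat >/var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
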
May 14 00:08:47.704498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:08:47.879001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:08:47.890431 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:08:47.948806 kubelet[1673]: E0514 00:08:47.948729 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:08:47.952481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:08:47.952684 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:08:47.953071 systemd[1]: kubelet.service: Consumed 211ms CPU time, 106.5M memory peak.
May 14 00:08:58.201700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 14 00:08:58.204786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:08:58.373325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:08:58.381512 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:08:58.429960 kubelet[1689]: E0514 00:08:58.429883 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:08:58.432913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:08:58.433304 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:08:58.433859 systemd[1]: kubelet.service: Consumed 193ms CPU time, 101.9M memory peak.
May 14 00:08:59.640553 update_engine[1500]: I20250514 00:08:59.640417 1500 update_attempter.cc:509] Updating boot flags...
May 14 00:08:59.708488 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1706)
May 14 00:08:59.760374 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1702)
May 14 00:08:59.818365 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1702)
May 14 00:09:08.451747 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 14 00:09:08.455199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:09:08.641004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:09:08.656561 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:09:08.698534 kubelet[1726]: E0514 00:09:08.698393 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:09:08.700352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:09:08.700578 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:09:08.701181 systemd[1]: kubelet.service: Consumed 183ms CPU time, 105.1M memory peak.
May 14 00:09:18.951772 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 14 00:09:18.954579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:09:19.121817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:09:19.130431 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:09:19.174367 kubelet[1742]: E0514 00:09:19.174281 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:09:19.177939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:09:19.178120 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:09:19.178472 systemd[1]: kubelet.service: Consumed 184ms CPU time, 104.3M memory peak.
May 14 00:09:29.201733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 14 00:09:29.204166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:09:29.389148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:09:29.400569 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:09:29.440773 kubelet[1757]: E0514 00:09:29.440696 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:09:29.444287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:09:29.444561 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:09:29.445157 systemd[1]: kubelet.service: Consumed 198ms CPU time, 106M memory peak.
May 14 00:09:39.451661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 14 00:09:39.454045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:09:39.609904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:09:39.619452 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:09:39.667283 kubelet[1772]: E0514 00:09:39.667169 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:09:39.671453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:09:39.671654 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:09:39.672057 systemd[1]: kubelet.service: Consumed 187ms CPU time, 103.7M memory peak.
May 14 00:09:43.619761 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 14 00:09:43.621964 systemd[1]: Started sshd@0-37.27.39.104:22-193.32.162.185:39204.service - OpenSSH per-connection server daemon (193.32.162.185:39204).
May 14 00:09:43.721016 sshd[1780]: Connection closed by 193.32.162.185 port 39204
May 14 00:09:43.723196 systemd[1]: sshd@0-37.27.39.104:22-193.32.162.185:39204.service: Deactivated successfully.
May 14 00:09:49.701677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
May 14 00:09:49.704036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:09:49.893716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:09:49.906647 (kubelet)[1791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:09:49.979292 kubelet[1791]: E0514 00:09:49.979048 1791 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:09:49.983312 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:09:49.983621 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:09:49.984348 systemd[1]: kubelet.service: Consumed 229ms CPU time, 105.1M memory peak.
May 14 00:09:55.322045 systemd[1]: Started sshd@1-37.27.39.104:22-139.178.89.65:60038.service - OpenSSH per-connection server daemon (139.178.89.65:60038).
May 14 00:09:56.335456 sshd[1800]: Accepted publickey for core from 139.178.89.65 port 60038 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:09:56.344606 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:09:56.363962 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 14 00:09:56.366309 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 14 00:09:56.370873 systemd-logind[1496]: New session 1 of user core.
May 14 00:09:56.400960 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 14 00:09:56.404721 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 14 00:09:56.421967 (systemd)[1804]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 14 00:09:56.425689 systemd-logind[1496]: New session c1 of user core.
May 14 00:09:56.626091 systemd[1804]: Queued start job for default target default.target.
May 14 00:09:56.632808 systemd[1804]: Created slice app.slice - User Application Slice.
May 14 00:09:56.632842 systemd[1804]: Reached target paths.target - Paths.
May 14 00:09:56.632884 systemd[1804]: Reached target timers.target - Timers.
May 14 00:09:56.634143 systemd[1804]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 14 00:09:56.659671 systemd[1804]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 14 00:09:56.659763 systemd[1804]: Reached target sockets.target - Sockets.
May 14 00:09:56.659793 systemd[1804]: Reached target basic.target - Basic System.
May 14 00:09:56.659819 systemd[1804]: Reached target default.target - Main User Target.
May 14 00:09:56.659839 systemd[1804]: Startup finished in 224ms.
May 14 00:09:56.660440 systemd[1]: Started user@500.service - User Manager for UID 500.
May 14 00:09:56.674480 systemd[1]: Started session-1.scope - Session 1 of User core.
May 14 00:09:57.364411 systemd[1]: Started sshd@2-37.27.39.104:22-139.178.89.65:34914.service - OpenSSH per-connection server daemon (139.178.89.65:34914).
May 14 00:09:58.371129 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 34914 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:09:58.373507 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:09:58.383044 systemd-logind[1496]: New session 2 of user core.
May 14 00:09:58.391459 systemd[1]: Started session-2.scope - Session 2 of User core.
May 14 00:09:59.054918 sshd[1817]: Connection closed by 139.178.89.65 port 34914
May 14 00:09:59.055882 sshd-session[1815]: pam_unix(sshd:session): session closed for user core
May 14 00:09:59.061993 systemd[1]: sshd@2-37.27.39.104:22-139.178.89.65:34914.service: Deactivated successfully.
May 14 00:09:59.062937 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit.
May 14 00:09:59.065835 systemd[1]: session-2.scope: Deactivated successfully.
May 14 00:09:59.068750 systemd-logind[1496]: Removed session 2.
May 14 00:09:59.229430 systemd[1]: Started sshd@3-37.27.39.104:22-139.178.89.65:34924.service - OpenSSH per-connection server daemon (139.178.89.65:34924).
May 14 00:10:00.066598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
May 14 00:10:00.070297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:10:00.236442 sshd[1823]: Accepted publickey for core from 139.178.89.65 port 34924 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:10:00.238257 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:10:00.245775 systemd-logind[1496]: New session 3 of user core.
May 14 00:10:00.254348 systemd[1]: Started session-3.scope - Session 3 of User core.
May 14 00:10:00.260337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:10:00.263600 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:10:00.321117 kubelet[1833]: E0514 00:10:00.320925 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:10:00.323444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:10:00.323682 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:10:00.324119 systemd[1]: kubelet.service: Consumed 218ms CPU time, 103.8M memory peak.
May 14 00:10:00.914430 sshd[1834]: Connection closed by 139.178.89.65 port 34924
May 14 00:10:00.915509 sshd-session[1823]: pam_unix(sshd:session): session closed for user core
May 14 00:10:00.927345 systemd[1]: sshd@3-37.27.39.104:22-139.178.89.65:34924.service: Deactivated successfully.
May 14 00:10:00.930303 systemd[1]: session-3.scope: Deactivated successfully.
May 14 00:10:00.932028 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit.
May 14 00:10:00.933928 systemd-logind[1496]: Removed session 3.
May 14 00:10:01.087676 systemd[1]: Started sshd@4-37.27.39.104:22-139.178.89.65:34936.service - OpenSSH per-connection server daemon (139.178.89.65:34936).
May 14 00:10:02.094867 sshd[1846]: Accepted publickey for core from 139.178.89.65 port 34936 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:10:02.096635 sshd-session[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:10:02.102505 systemd-logind[1496]: New session 4 of user core.
May 14 00:10:02.110506 systemd[1]: Started session-4.scope - Session 4 of User core.
May 14 00:10:02.776635 sshd[1848]: Connection closed by 139.178.89.65 port 34936
May 14 00:10:02.777523 sshd-session[1846]: pam_unix(sshd:session): session closed for user core
May 14 00:10:02.782151 systemd[1]: sshd@4-37.27.39.104:22-139.178.89.65:34936.service: Deactivated successfully.
May 14 00:10:02.784741 systemd[1]: session-4.scope: Deactivated successfully.
May 14 00:10:02.787130 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit.
May 14 00:10:02.789031 systemd-logind[1496]: Removed session 4.
May 14 00:10:02.950845 systemd[1]: Started sshd@5-37.27.39.104:22-139.178.89.65:34944.service - OpenSSH per-connection server daemon (139.178.89.65:34944).
May 14 00:10:03.968311 sshd[1854]: Accepted publickey for core from 139.178.89.65 port 34944 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:10:03.970897 sshd-session[1854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:10:03.980582 systemd-logind[1496]: New session 5 of user core.
May 14 00:10:03.990679 systemd[1]: Started session-5.scope - Session 5 of User core.
May 14 00:10:04.505216 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 14 00:10:04.505759 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 00:10:04.524054 sudo[1857]: pam_unix(sudo:session): session closed for user root
May 14 00:10:04.682803 sshd[1856]: Connection closed by 139.178.89.65 port 34944
May 14 00:10:04.684119 sshd-session[1854]: pam_unix(sshd:session): session closed for user core
May 14 00:10:04.689453 systemd[1]: sshd@5-37.27.39.104:22-139.178.89.65:34944.service: Deactivated successfully.
May 14 00:10:04.692469 systemd[1]: session-5.scope: Deactivated successfully.
May 14 00:10:04.694351 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit.
May 14 00:10:04.695919 systemd-logind[1496]: Removed session 5.
May 14 00:10:04.857665 systemd[1]: Started sshd@6-37.27.39.104:22-139.178.89.65:34954.service - OpenSSH per-connection server daemon (139.178.89.65:34954).
May 14 00:10:05.869800 sshd[1863]: Accepted publickey for core from 139.178.89.65 port 34954 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:10:05.872978 sshd-session[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:10:05.882934 systemd-logind[1496]: New session 6 of user core.
May 14 00:10:05.889003 systemd[1]: Started session-6.scope - Session 6 of User core.
May 14 00:10:06.393382 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 14 00:10:06.394032 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 00:10:06.400083 sudo[1867]: pam_unix(sudo:session): session closed for user root
May 14 00:10:06.408865 sudo[1866]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 14 00:10:06.409536 sudo[1866]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 00:10:06.427678 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 00:10:06.487824 augenrules[1889]: No rules
May 14 00:10:06.489675 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 00:10:06.490207 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 00:10:06.492290 sudo[1866]: pam_unix(sudo:session): session closed for user root
May 14 00:10:06.650815 sshd[1865]: Connection closed by 139.178.89.65 port 34954
May 14 00:10:06.651627 sshd-session[1863]: pam_unix(sshd:session): session closed for user core
May 14 00:10:06.655661 systemd[1]: sshd@6-37.27.39.104:22-139.178.89.65:34954.service: Deactivated successfully.
May 14 00:10:06.657833 systemd[1]: session-6.scope: Deactivated successfully.
May 14 00:10:06.658792 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit.
May 14 00:10:06.660028 systemd-logind[1496]: Removed session 6.
May 14 00:10:06.823654 systemd[1]: Started sshd@7-37.27.39.104:22-139.178.89.65:52998.service - OpenSSH per-connection server daemon (139.178.89.65:52998).
May 14 00:10:07.831294 sshd[1898]: Accepted publickey for core from 139.178.89.65 port 52998 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:10:07.834582 sshd-session[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:10:07.842323 systemd-logind[1496]: New session 7 of user core.
May 14 00:10:07.853541 systemd[1]: Started session-7.scope - Session 7 of User core.
May 14 00:10:08.356204 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 14 00:10:08.356815 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 14 00:10:08.894193 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 14 00:10:08.914813 (dockerd)[1918]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 14 00:10:09.270158 dockerd[1918]: time="2025-05-14T00:10:09.270012288Z" level=info msg="Starting up"
May 14 00:10:09.274394 dockerd[1918]: time="2025-05-14T00:10:09.274360916Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 14 00:10:09.306565 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1057220310-merged.mount: Deactivated successfully.
May 14 00:10:09.328966 systemd[1]: var-lib-docker-metacopy\x2dcheck2378086513-merged.mount: Deactivated successfully.
May 14 00:10:09.348064 dockerd[1918]: time="2025-05-14T00:10:09.348005093Z" level=info msg="Loading containers: start."
May 14 00:10:09.490357 kernel: Initializing XFRM netlink socket
May 14 00:10:09.558090 systemd-networkd[1411]: docker0: Link UP
May 14 00:10:09.605441 dockerd[1918]: time="2025-05-14T00:10:09.605361792Z" level=info msg="Loading containers: done."
May 14 00:10:09.623869 dockerd[1918]: time="2025-05-14T00:10:09.623809188Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 14 00:10:09.624013 dockerd[1918]: time="2025-05-14T00:10:09.623949652Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
May 14 00:10:09.624146 dockerd[1918]: time="2025-05-14T00:10:09.624113499Z" level=info msg="Daemon has completed initialization"
May 14 00:10:09.680765 dockerd[1918]: time="2025-05-14T00:10:09.680307466Z" level=info msg="API listen on /run/docker.sock"
May 14 00:10:09.680481 systemd[1]: Started docker.service - Docker Application Container Engine.
May 14 00:10:10.451298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
May 14 00:10:10.453916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:10:10.627321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:10:10.637467 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:10:10.674590 kubelet[2122]: E0514 00:10:10.674508 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:10:10.678373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:10:10.678558 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:10:10.678911 systemd[1]: kubelet.service: Consumed 191ms CPU time, 103.8M memory peak.
May 14 00:10:11.093529 containerd[1528]: time="2025-05-14T00:10:11.092929380Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 14 00:10:11.790712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4082975897.mount: Deactivated successfully.
May 14 00:10:13.658984 containerd[1528]: time="2025-05-14T00:10:13.658907697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:13.660366 containerd[1528]: time="2025-05-14T00:10:13.660104911Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682973"
May 14 00:10:13.662524 containerd[1528]: time="2025-05-14T00:10:13.661473767Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:13.664123 containerd[1528]: time="2025-05-14T00:10:13.664091573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:13.664831 containerd[1528]: time="2025-05-14T00:10:13.664812113Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.571829734s"
May 14 00:10:13.664907 containerd[1528]: time="2025-05-14T00:10:13.664894378Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 14 00:10:13.665499 containerd[1528]: time="2025-05-14T00:10:13.665471449Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 14 00:10:16.279414 containerd[1528]: time="2025-05-14T00:10:16.279351772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:16.280479 containerd[1528]: time="2025-05-14T00:10:16.280420255Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779611"
May 14 00:10:16.281579 containerd[1528]: time="2025-05-14T00:10:16.281518523Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:16.284208 containerd[1528]: time="2025-05-14T00:10:16.283909876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:16.284669 containerd[1528]: time="2025-05-14T00:10:16.284639884Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.619137577s"
May 14 00:10:16.284715 containerd[1528]: time="2025-05-14T00:10:16.284671704Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 14 00:10:16.285451 containerd[1528]: time="2025-05-14T00:10:16.285427671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 14 00:10:18.228433 containerd[1528]: time="2025-05-14T00:10:18.228382044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:18.229463 containerd[1528]: time="2025-05-14T00:10:18.229210867Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169960"
May 14 00:10:18.230817 containerd[1528]: time="2025-05-14T00:10:18.230765030Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:18.233425 containerd[1528]: time="2025-05-14T00:10:18.233362099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:18.236571 containerd[1528]: time="2025-05-14T00:10:18.234529496Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.949020583s"
May 14 00:10:18.236571 containerd[1528]: time="2025-05-14T00:10:18.234826364Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 14 00:10:18.237856 containerd[1528]: time="2025-05-14T00:10:18.237834342Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 14 00:10:19.349243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761483626.mount: Deactivated successfully.
May 14 00:10:19.719312 containerd[1528]: time="2025-05-14T00:10:19.719244142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:19.720812 containerd[1528]: time="2025-05-14T00:10:19.720573695Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917884"
May 14 00:10:19.723111 containerd[1528]: time="2025-05-14T00:10:19.721791808Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:19.724354 containerd[1528]: time="2025-05-14T00:10:19.724313695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:19.725311 containerd[1528]: time="2025-05-14T00:10:19.724618767Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.486690449s"
May 14 00:10:19.725311 containerd[1528]: time="2025-05-14T00:10:19.724651950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 14 00:10:19.725549 containerd[1528]: time="2025-05-14T00:10:19.725513465Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 14 00:10:20.300993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371070072.mount: Deactivated successfully.
May 14 00:10:20.701261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
May 14 00:10:20.705118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:10:20.837902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:10:20.843547 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 14 00:10:20.885886 kubelet[2254]: E0514 00:10:20.885832 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 14 00:10:20.891408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 14 00:10:20.891609 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 14 00:10:20.892359 systemd[1]: kubelet.service: Consumed 142ms CPU time, 103.8M memory peak.
May 14 00:10:21.160374 containerd[1528]: time="2025-05-14T00:10:21.160039577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:21.162236 containerd[1528]: time="2025-05-14T00:10:21.162165102Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
May 14 00:10:21.165812 containerd[1528]: time="2025-05-14T00:10:21.165757066Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:21.169189 containerd[1528]: time="2025-05-14T00:10:21.169154696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:21.170114 containerd[1528]: time="2025-05-14T00:10:21.170093626Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.444555296s"
May 14 00:10:21.170202 containerd[1528]: time="2025-05-14T00:10:21.170189797Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 14 00:10:21.171196 containerd[1528]: time="2025-05-14T00:10:21.171156770Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 14 00:10:21.700960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573321821.mount: Deactivated successfully.
May 14 00:10:21.713128 containerd[1528]: time="2025-05-14T00:10:21.711888397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 00:10:21.713741 containerd[1528]: time="2025-05-14T00:10:21.713654673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
May 14 00:10:21.716008 containerd[1528]: time="2025-05-14T00:10:21.715943595Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 00:10:21.720672 containerd[1528]: time="2025-05-14T00:10:21.719247571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 14 00:10:21.722061 containerd[1528]: time="2025-05-14T00:10:21.722013453Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 550.814615ms"
May 14 00:10:21.722218 containerd[1528]: time="2025-05-14T00:10:21.722192848Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 14 00:10:21.723337 containerd[1528]: time="2025-05-14T00:10:21.723283624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 14 00:10:22.322650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598063396.mount: Deactivated successfully.
May 14 00:10:24.093664 containerd[1528]: time="2025-05-14T00:10:24.093602117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:24.095648 containerd[1528]: time="2025-05-14T00:10:24.095393918Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551430"
May 14 00:10:24.098114 containerd[1528]: time="2025-05-14T00:10:24.096681030Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:24.100319 containerd[1528]: time="2025-05-14T00:10:24.099255283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:24.100319 containerd[1528]: time="2025-05-14T00:10:24.100164823Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.376659677s"
May 14 00:10:24.100319 containerd[1528]: time="2025-05-14T00:10:24.100203082Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 14 00:10:27.239457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:10:27.239879 systemd[1]: kubelet.service: Consumed 142ms CPU time, 103.8M memory peak.
May 14 00:10:27.244545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:10:27.291889 systemd[1]: Reload requested from client PID 2351 ('systemctl') (unit session-7.scope)...
May 14 00:10:27.291918 systemd[1]: Reloading...
May 14 00:10:27.413332 zram_generator::config[2396]: No configuration found.
May 14 00:10:27.523374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:10:27.626847 systemd[1]: Reloading finished in 334 ms.
May 14 00:10:27.666900 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 14 00:10:27.666964 systemd[1]: kubelet.service: Failed with result 'signal'.
May 14 00:10:27.667181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:10:27.667215 systemd[1]: kubelet.service: Consumed 89ms CPU time, 91.3M memory peak.
May 14 00:10:27.668543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:10:27.796552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:10:27.805514 (kubelet)[2448]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 00:10:27.851710 kubelet[2448]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:10:27.851710 kubelet[2448]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 14 00:10:27.851710 kubelet[2448]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:10:27.852142 kubelet[2448]: I0514 00:10:27.851797 2448 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 00:10:28.213856 kubelet[2448]: I0514 00:10:28.213774 2448 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 14 00:10:28.213856 kubelet[2448]: I0514 00:10:28.213844 2448 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 00:10:28.214561 kubelet[2448]: I0514 00:10:28.214523 2448 server.go:954] "Client rotation is on, will bootstrap in background"
May 14 00:10:28.255406 kubelet[2448]: I0514 00:10:28.255358 2448 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 00:10:28.260278 kubelet[2448]: E0514 00:10:28.260247 2448 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://37.27.39.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.39.104:6443: connect: connection refused" logger="UnhandledError"
May 14 00:10:28.275364 kubelet[2448]: I0514 00:10:28.275312 2448 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 14 00:10:28.286606 kubelet[2448]: I0514 00:10:28.286566 2448 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 00:10:28.291731 kubelet[2448]: I0514 00:10:28.291654 2448 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 00:10:28.291989 kubelet[2448]: I0514 00:10:28.291732 2448 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-186718797f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 00:10:28.295289 kubelet[2448]: I0514 00:10:28.295242 2448 topology_manager.go:138] "Creating topology manager with none policy"
May 14 00:10:28.295289 kubelet[2448]: I0514 00:10:28.295272 2448 container_manager_linux.go:304] "Creating device plugin manager"
May 14 00:10:28.295473 kubelet[2448]: I0514 00:10:28.295434 2448 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:10:28.301532 kubelet[2448]: I0514 00:10:28.301456 2448 kubelet.go:446] "Attempting to sync node with API server"
May 14 00:10:28.301532 kubelet[2448]: I0514 00:10:28.301514 2448 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 00:10:28.304728 kubelet[2448]: I0514 00:10:28.304153 2448 kubelet.go:352] "Adding apiserver pod source"
May 14 00:10:28.304728 kubelet[2448]: I0514 00:10:28.304186 2448 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 00:10:28.316104 kubelet[2448]: W0514 00:10:28.315204 2448 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.39.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-186718797f&limit=500&resourceVersion=0": dial tcp 37.27.39.104:6443: connect: connection refused
May 14 00:10:28.316104 kubelet[2448]: E0514 00:10:28.315803 2448 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.39.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-186718797f&limit=500&resourceVersion=0\": dial tcp 37.27.39.104:6443: connect: connection refused" logger="UnhandledError"
May 14 00:10:28.316104 kubelet[2448]: I0514 00:10:28.315954 2448 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 14 00:10:28.321526 kubelet[2448]: I0514 00:10:28.321374 2448 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 00:10:28.323878 kubelet[2448]: W0514 00:10:28.323289 2448 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 14 00:10:28.323878 kubelet[2448]: W0514 00:10:28.323547 2448 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.39.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.39.104:6443: connect: connection refused
May 14 00:10:28.323878 kubelet[2448]: E0514 00:10:28.323620 2448 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.39.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.39.104:6443: connect: connection refused" logger="UnhandledError"
May 14 00:10:28.324196 kubelet[2448]: I0514 00:10:28.324164 2448 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 14 00:10:28.324291 kubelet[2448]: I0514 00:10:28.324209 2448 server.go:1287] "Started kubelet"
May 14 00:10:28.325448 kubelet[2448]: I0514 00:10:28.325398 2448 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 14 00:10:28.332298 kubelet[2448]: I0514 00:10:28.332189 2448 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 00:10:28.334257 kubelet[2448]: I0514 00:10:28.332747 2448 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 00:10:28.334257 kubelet[2448]: I0514 00:10:28.333462 2448 server.go:490] "Adding debug handlers to kubelet server"
May 14 00:10:28.338331 kubelet[2448]: E0514 00:10:28.334392 2448 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.39.104:6443/api/v1/namespaces/default/events\": dial tcp 37.27.39.104:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-186718797f.183f3c40896c319b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-186718797f,UID:ci-4284-0-0-n-186718797f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-186718797f,},FirstTimestamp:2025-05-14 00:10:28.324184475 +0000 UTC m=+0.516032618,LastTimestamp:2025-05-14 00:10:28.324184475 +0000 UTC m=+0.516032618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-186718797f,}"
May 14 00:10:28.342428 kubelet[2448]: I0514 00:10:28.342396 2448 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 00:10:28.343269 kubelet[2448]: I0514 00:10:28.342767 2448 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 14 00:10:28.343269 kubelet[2448]: I0514 00:10:28.342864 2448 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 00:10:28.347482 kubelet[2448]: I0514 00:10:28.347466 2448 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 00:10:28.348325 kubelet[2448]: I0514 00:10:28.348310 2448 reconciler.go:26] "Reconciler: start to sync state"
May 14 00:10:28.349076 kubelet[2448]: W0514 00:10:28.349032 2448 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.39.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.39.104:6443: connect: connection refused
May 14 00:10:28.349659 kubelet[2448]: E0514 00:10:28.349638 2448 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.39.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.39.104:6443: connect: connection refused" logger="UnhandledError"
May 14 00:10:28.351468 kubelet[2448]: E0514 00:10:28.351097 2448 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-186718797f\" not found"
May 14 00:10:28.351468 kubelet[2448]: E0514 00:10:28.351387 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.39.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-186718797f?timeout=10s\": dial tcp 37.27.39.104:6443: connect: connection refused" interval="200ms"
May 14 00:10:28.352733 kubelet[2448]: I0514 00:10:28.352393 2448 factory.go:221] Registration of the systemd container factory successfully
May 14 00:10:28.352733 kubelet[2448]: I0514 00:10:28.352519 2448 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 00:10:28.357315 kubelet[2448]: I0514 00:10:28.357295 2448 factory.go:221] Registration of the containerd container factory successfully
May 14 00:10:28.372522 kubelet[2448]: I0514 00:10:28.372472 2448 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 00:10:28.373864 kubelet[2448]: I0514 00:10:28.373843 2448 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 00:10:28.373963 kubelet[2448]: I0514 00:10:28.373953 2448 status_manager.go:227] "Starting to sync pod status with apiserver"
May 14 00:10:28.374053 kubelet[2448]: I0514 00:10:28.374042 2448 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 14 00:10:28.374111 kubelet[2448]: I0514 00:10:28.374103 2448 kubelet.go:2388] "Starting kubelet main sync loop"
May 14 00:10:28.374512 kubelet[2448]: E0514 00:10:28.374209 2448 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 00:10:28.382962 kubelet[2448]: W0514 00:10:28.382901 2448 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.39.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.39.104:6443: connect: connection refused
May 14 00:10:28.383008 kubelet[2448]: E0514 00:10:28.382976 2448 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.39.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.39.104:6443: connect: connection refused" logger="UnhandledError"
May 14 00:10:28.397316 kubelet[2448]: I0514 00:10:28.397262 2448 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 14 00:10:28.397316 kubelet[2448]: I0514 00:10:28.397281 2448 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 14 00:10:28.397620 kubelet[2448]: I0514 00:10:28.397545 2448 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:10:28.400878 kubelet[2448]: I0514 00:10:28.400865 2448 policy_none.go:49] "None policy: Start"
May 14 00:10:28.401011 kubelet[2448]: I0514 00:10:28.400953 2448 memory_manager.go:186] "Starting memorymanager" policy="None"
May 14 00:10:28.401011 kubelet[2448]: I0514 00:10:28.400965 2448 state_mem.go:35] "Initializing new in-memory state store"
May 14 00:10:28.407266 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 14 00:10:28.417825 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 14 00:10:28.421757 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 14 00:10:28.429024 kubelet[2448]: I0514 00:10:28.429005 2448 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 00:10:28.429408 kubelet[2448]: I0514 00:10:28.429389 2448 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 00:10:28.429509 kubelet[2448]: I0514 00:10:28.429481 2448 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 00:10:28.430448 kubelet[2448]: I0514 00:10:28.430378 2448 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 00:10:28.431198 kubelet[2448]: E0514 00:10:28.430969 2448 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 14 00:10:28.431260 kubelet[2448]: E0514 00:10:28.431251 2448 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-n-186718797f\" not found"
May 14 00:10:28.497785 systemd[1]: Created slice kubepods-burstable-podb4f488b485dafcac26d797b0d5f412ff.slice - libcontainer container kubepods-burstable-podb4f488b485dafcac26d797b0d5f412ff.slice.
May 14 00:10:28.514358 kubelet[2448]: E0514 00:10:28.514299 2448 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-186718797f\" not found" node="ci-4284-0-0-n-186718797f"
May 14 00:10:28.521876 systemd[1]: Created slice kubepods-burstable-pod1dbaea622ff035f2daf1127e9e864dfd.slice - libcontainer container kubepods-burstable-pod1dbaea622ff035f2daf1127e9e864dfd.slice.
May 14 00:10:28.525668 kubelet[2448]: E0514 00:10:28.525635 2448 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-186718797f\" not found" node="ci-4284-0-0-n-186718797f"
May 14 00:10:28.531548 systemd[1]: Created slice kubepods-burstable-pod49f326ba8d692043be41345871f0382b.slice - libcontainer container kubepods-burstable-pod49f326ba8d692043be41345871f0382b.slice.
May 14 00:10:28.533474 kubelet[2448]: I0514 00:10:28.533376 2448 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-186718797f"
May 14 00:10:28.533973 kubelet[2448]: E0514 00:10:28.533850 2448 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://37.27.39.104:6443/api/v1/nodes\": dial tcp 37.27.39.104:6443: connect: connection refused" node="ci-4284-0-0-n-186718797f"
May 14 00:10:28.536086 kubelet[2448]: E0514 00:10:28.536015 2448 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-186718797f\" not found" node="ci-4284-0-0-n-186718797f"
May 14 00:10:28.552330 kubelet[2448]: E0514 00:10:28.552264 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.39.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-186718797f?timeout=10s\": dial tcp 37.27.39.104:6443: connect: connection refused" interval="400ms"
May 14 00:10:28.650151 kubelet[2448]: I0514 00:10:28.650066 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4f488b485dafcac26d797b0d5f412ff-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-186718797f\" (UID: \"b4f488b485dafcac26d797b0d5f412ff\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:28.650151 kubelet[2448]: I0514 00:10:28.650119 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1dbaea622ff035f2daf1127e9e864dfd-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-186718797f\" (UID: \"1dbaea622ff035f2daf1127e9e864dfd\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:28.650151 kubelet[2448]: I0514 00:10:28.650151 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dbaea622ff035f2daf1127e9e864dfd-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-186718797f\" (UID: \"1dbaea622ff035f2daf1127e9e864dfd\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:28.650151 kubelet[2448]: I0514 00:10:28.650174 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dbaea622ff035f2daf1127e9e864dfd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-186718797f\" (UID: \"1dbaea622ff035f2daf1127e9e864dfd\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:28.650598 kubelet[2448]: I0514 00:10:28.650201 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4f488b485dafcac26d797b0d5f412ff-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-186718797f\" (UID: \"b4f488b485dafcac26d797b0d5f412ff\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:28.650598 kubelet[2448]: I0514 00:10:28.650254 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4f488b485dafcac26d797b0d5f412ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-186718797f\" (UID: \"b4f488b485dafcac26d797b0d5f412ff\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:28.650598 kubelet[2448]: I0514 00:10:28.650275 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dbaea622ff035f2daf1127e9e864dfd-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-186718797f\" (UID: \"1dbaea622ff035f2daf1127e9e864dfd\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:28.650598 kubelet[2448]: I0514 00:10:28.650294 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1dbaea622ff035f2daf1127e9e864dfd-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-186718797f\" (UID: \"1dbaea622ff035f2daf1127e9e864dfd\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:28.650598 kubelet[2448]: I0514 00:10:28.650315 2448 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49f326ba8d692043be41345871f0382b-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-186718797f\" (UID: \"49f326ba8d692043be41345871f0382b\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-186718797f"
May 14 00:10:28.737314 kubelet[2448]: I0514 00:10:28.737256 2448 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-186718797f"
May 14 00:10:28.737954 kubelet[2448]: E0514 00:10:28.737875 2448 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://37.27.39.104:6443/api/v1/nodes\": dial tcp 37.27.39.104:6443: connect: connection refused" node="ci-4284-0-0-n-186718797f"
May 14 00:10:28.817003 containerd[1528]: time="2025-05-14T00:10:28.816801086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-186718797f,Uid:b4f488b485dafcac26d797b0d5f412ff,Namespace:kube-system,Attempt:0,}"
May 14 00:10:28.837320 containerd[1528]: time="2025-05-14T00:10:28.837199293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-186718797f,Uid:1dbaea622ff035f2daf1127e9e864dfd,Namespace:kube-system,Attempt:0,}"
May 14 00:10:28.838505 containerd[1528]: time="2025-05-14T00:10:28.838385933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-186718797f,Uid:49f326ba8d692043be41345871f0382b,Namespace:kube-system,Attempt:0,}"
May 14 00:10:28.955538 kubelet[2448]: E0514 00:10:28.955307 2448 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.39.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-186718797f?timeout=10s\": dial tcp 37.27.39.104:6443: connect: connection refused" interval="800ms"
May 14 00:10:29.001522 containerd[1528]: time="2025-05-14T00:10:29.001444277Z" level=info msg="connecting to shim 024849be89ce7e2b6ed84b6b2e452c229ecde70fbe69a3e0561f6a4385025b1a" address="unix:///run/containerd/s/ef5adbd54ff37745875d3e428e6c3370f92b771ec77e811d9c788aba4daff7ff" namespace=k8s.io protocol=ttrpc version=3
May 14 00:10:29.007759 containerd[1528]: time="2025-05-14T00:10:29.007606288Z" level=info msg="connecting to shim fd610c6d18e452debc2fdf59a261132433483fe76d54562e0ec4ae4074a300fc" address="unix:///run/containerd/s/99c5acdf7cc7191ac0ed72ef94687ac1cc8438ba41f0d6b01d764222e60afcc6" namespace=k8s.io protocol=ttrpc version=3
May 14 00:10:29.008081 containerd[1528]: time="2025-05-14T00:10:29.008041447Z" level=info msg="connecting to shim e583d60b07e969388f97cdec8747804179cd438ce537aa023310eb7835fa5368" address="unix:///run/containerd/s/10117b86afe5a4edbdf8f7cffb37d3c5c06dd09519fd64794c663b2ce12367df" namespace=k8s.io protocol=ttrpc version=3
May 14 00:10:29.096458 systemd[1]: Started cri-containerd-024849be89ce7e2b6ed84b6b2e452c229ecde70fbe69a3e0561f6a4385025b1a.scope - libcontainer container 024849be89ce7e2b6ed84b6b2e452c229ecde70fbe69a3e0561f6a4385025b1a.
May 14 00:10:29.108419 systemd[1]: Started cri-containerd-e583d60b07e969388f97cdec8747804179cd438ce537aa023310eb7835fa5368.scope - libcontainer container e583d60b07e969388f97cdec8747804179cd438ce537aa023310eb7835fa5368.
May 14 00:10:29.110715 systemd[1]: Started cri-containerd-fd610c6d18e452debc2fdf59a261132433483fe76d54562e0ec4ae4074a300fc.scope - libcontainer container fd610c6d18e452debc2fdf59a261132433483fe76d54562e0ec4ae4074a300fc.
May 14 00:10:29.140391 kubelet[2448]: I0514 00:10:29.140327 2448 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-186718797f"
May 14 00:10:29.140642 kubelet[2448]: E0514 00:10:29.140612 2448 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://37.27.39.104:6443/api/v1/nodes\": dial tcp 37.27.39.104:6443: connect: connection refused" node="ci-4284-0-0-n-186718797f"
May 14 00:10:29.174927 containerd[1528]: time="2025-05-14T00:10:29.174811605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-186718797f,Uid:1dbaea622ff035f2daf1127e9e864dfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e583d60b07e969388f97cdec8747804179cd438ce537aa023310eb7835fa5368\""
May 14 00:10:29.181355 containerd[1528]: time="2025-05-14T00:10:29.181260376Z" level=info msg="CreateContainer within sandbox \"e583d60b07e969388f97cdec8747804179cd438ce537aa023310eb7835fa5368\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 14 00:10:29.188050 containerd[1528]: time="2025-05-14T00:10:29.187947009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-186718797f,Uid:49f326ba8d692043be41345871f0382b,Namespace:kube-system,Attempt:0,} returns sandbox id \"024849be89ce7e2b6ed84b6b2e452c229ecde70fbe69a3e0561f6a4385025b1a\""
May 14 00:10:29.192536 containerd[1528]: time="2025-05-14T00:10:29.192498551Z" level=info msg="CreateContainer within sandbox \"024849be89ce7e2b6ed84b6b2e452c229ecde70fbe69a3e0561f6a4385025b1a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 14 00:10:29.194506 containerd[1528]: time="2025-05-14T00:10:29.194482448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-186718797f,Uid:b4f488b485dafcac26d797b0d5f412ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd610c6d18e452debc2fdf59a261132433483fe76d54562e0ec4ae4074a300fc\""
May 14 00:10:29.197487 containerd[1528]: time="2025-05-14T00:10:29.196963256Z" level=info msg="CreateContainer within sandbox \"fd610c6d18e452debc2fdf59a261132433483fe76d54562e0ec4ae4074a300fc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 14 00:10:29.200308 containerd[1528]: time="2025-05-14T00:10:29.200271483Z" level=info msg="Container d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78: CDI devices from CRI Config.CDIDevices: []"
May 14 00:10:29.210978 containerd[1528]: time="2025-05-14T00:10:29.210913097Z" level=info msg="Container a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96: CDI devices from CRI Config.CDIDevices: []"
May 14 00:10:29.214041 containerd[1528]: time="2025-05-14T00:10:29.213999913Z" level=info msg="Container e154c3ed7489944894d52ff1ed24bd8e66c4afa88351d2ef1968b46fd0fbe1fe: CDI devices from CRI Config.CDIDevices: []"
May 14 00:10:29.218315 containerd[1528]: time="2025-05-14T00:10:29.218204527Z" level=info msg="CreateContainer within sandbox \"e583d60b07e969388f97cdec8747804179cd438ce537aa023310eb7835fa5368\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78\""
May 14 00:10:29.218974 containerd[1528]: time="2025-05-14T00:10:29.218939509Z" level=info msg="StartContainer for \"d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78\""
May 14 00:10:29.221942 containerd[1528]: time="2025-05-14T00:10:29.221906128Z" level=info msg="connecting to shim d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78" address="unix:///run/containerd/s/10117b86afe5a4edbdf8f7cffb37d3c5c06dd09519fd64794c663b2ce12367df" protocol=ttrpc version=3
May 14 00:10:29.228672 containerd[1528]: time="2025-05-14T00:10:29.228632382Z" level=info msg="CreateContainer within sandbox \"fd610c6d18e452debc2fdf59a261132433483fe76d54562e0ec4ae4074a300fc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e154c3ed7489944894d52ff1ed24bd8e66c4afa88351d2ef1968b46fd0fbe1fe\""
May 14 00:10:29.229433 containerd[1528]: time="2025-05-14T00:10:29.229390257Z" level=info msg="CreateContainer within sandbox \"024849be89ce7e2b6ed84b6b2e452c229ecde70fbe69a3e0561f6a4385025b1a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96\""
May 14 00:10:29.230792 containerd[1528]: time="2025-05-14T00:10:29.230654398Z" level=info msg="StartContainer for \"e154c3ed7489944894d52ff1ed24bd8e66c4afa88351d2ef1968b46fd0fbe1fe\""
May 14 00:10:29.234635 containerd[1528]: time="2025-05-14T00:10:29.234412702Z" level=info msg="connecting to shim e154c3ed7489944894d52ff1ed24bd8e66c4afa88351d2ef1968b46fd0fbe1fe" address="unix:///run/containerd/s/99c5acdf7cc7191ac0ed72ef94687ac1cc8438ba41f0d6b01d764222e60afcc6" protocol=ttrpc version=3
May 14 00:10:29.238085 containerd[1528]: time="2025-05-14T00:10:29.238052100Z" level=info msg="StartContainer for \"a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96\""
May 14 00:10:29.241954 containerd[1528]: time="2025-05-14T00:10:29.241917168Z" level=info msg="connecting to shim a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96" address="unix:///run/containerd/s/ef5adbd54ff37745875d3e428e6c3370f92b771ec77e811d9c788aba4daff7ff" protocol=ttrpc version=3
May 14 00:10:29.250440 systemd[1]: Started cri-containerd-d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78.scope - libcontainer container d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78.
May 14 00:10:29.277415 systemd[1]: Started cri-containerd-e154c3ed7489944894d52ff1ed24bd8e66c4afa88351d2ef1968b46fd0fbe1fe.scope - libcontainer container e154c3ed7489944894d52ff1ed24bd8e66c4afa88351d2ef1968b46fd0fbe1fe.
May 14 00:10:29.287845 systemd[1]: Started cri-containerd-a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96.scope - libcontainer container a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96.
May 14 00:10:29.336590 containerd[1528]: time="2025-05-14T00:10:29.336298373Z" level=info msg="StartContainer for \"d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78\" returns successfully"
May 14 00:10:29.374355 containerd[1528]: time="2025-05-14T00:10:29.373735557Z" level=info msg="StartContainer for \"e154c3ed7489944894d52ff1ed24bd8e66c4afa88351d2ef1968b46fd0fbe1fe\" returns successfully"
May 14 00:10:29.390410 containerd[1528]: time="2025-05-14T00:10:29.390321547Z" level=info msg="StartContainer for \"a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96\" returns successfully"
May 14 00:10:29.399083 kubelet[2448]: E0514 00:10:29.399041 2448 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-186718797f\" not found" node="ci-4284-0-0-n-186718797f"
May 14 00:10:29.400867 kubelet[2448]: E0514 00:10:29.400799 2448 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-186718797f\" not found" node="ci-4284-0-0-n-186718797f"
May 14 00:10:29.404701 kubelet[2448]: E0514 00:10:29.404679 2448 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-186718797f\" not found" node="ci-4284-0-0-n-186718797f"
May 14 00:10:29.419685 kubelet[2448]: W0514 00:10:29.419620 2448 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.39.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.39.104:6443: connect: connection refused
May 14 00:10:29.419685 kubelet[2448]: E0514 00:10:29.419685 2448 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.39.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.39.104:6443: connect: connection refused" logger="UnhandledError"
May 14 00:10:29.550367 kubelet[2448]: W0514 00:10:29.550284 2448 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.39.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.39.104:6443: connect: connection refused
May 14 00:10:29.550367 kubelet[2448]: E0514 00:10:29.550363 2448 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.39.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.39.104:6443: connect: connection refused" logger="UnhandledError"
May 14 00:10:29.943664 kubelet[2448]: I0514 00:10:29.943514 2448 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-186718797f"
May 14 00:10:30.409143 kubelet[2448]: E0514 00:10:30.409093 2448 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-186718797f\" not found" node="ci-4284-0-0-n-186718797f"
May 14 00:10:30.409679 kubelet[2448]: E0514 00:10:30.409471 2448 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-186718797f\" not found" node="ci-4284-0-0-n-186718797f"
May 14 00:10:31.057175 kubelet[2448]: E0514 00:10:31.057144 2448 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284-0-0-n-186718797f\" not found" node="ci-4284-0-0-n-186718797f"
May 14 00:10:31.155412 kubelet[2448]: E0514 00:10:31.155158 2448 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4284-0-0-n-186718797f.183f3c40896c319b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-186718797f,UID:ci-4284-0-0-n-186718797f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-186718797f,},FirstTimestamp:2025-05-14 00:10:28.324184475 +0000 UTC m=+0.516032618,LastTimestamp:2025-05-14 00:10:28.324184475 +0000 UTC m=+0.516032618,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-186718797f,}"
May 14 00:10:31.218612 kubelet[2448]: I0514 00:10:31.218550 2448 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284-0-0-n-186718797f"
May 14 00:10:31.252823 kubelet[2448]: I0514 00:10:31.251565 2448 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:31.269269 kubelet[2448]: E0514 00:10:31.269171 2448 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284-0-0-n-186718797f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:31.269269 kubelet[2448]: I0514 00:10:31.269214 2448 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-186718797f"
May 14 00:10:31.274428 kubelet[2448]: E0514 00:10:31.274389 2448 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-186718797f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284-0-0-n-186718797f"
May 14 00:10:31.274428 kubelet[2448]: I0514 00:10:31.274420 2448 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:31.277123 kubelet[2448]: E0514 00:10:31.277060 2448 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284-0-0-n-186718797f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:31.324949 kubelet[2448]: I0514 00:10:31.324807 2448 apiserver.go:52] "Watching apiserver"
May 14 00:10:31.351254 kubelet[2448]: I0514 00:10:31.348418 2448 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 00:10:31.411268 kubelet[2448]: I0514 00:10:31.409739 2448 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-186718797f"
May 14 00:10:31.411711 kubelet[2448]: E0514 00:10:31.411304 2448 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-186718797f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284-0-0-n-186718797f"
May 14 00:10:32.229272 kubelet[2448]: I0514 00:10:32.226866 2448 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:33.801805 systemd[1]: Reload requested from client PID 2713 ('systemctl') (unit session-7.scope)...
May 14 00:10:33.801838 systemd[1]: Reloading...
May 14 00:10:33.928321 zram_generator::config[2758]: No configuration found.
May 14 00:10:34.041355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 00:10:34.165919 systemd[1]: Reloading finished in 358 ms.
May 14 00:10:34.193045 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:10:34.195507 kubelet[2448]: I0514 00:10:34.194283 2448 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 00:10:34.224485 systemd[1]: kubelet.service: Deactivated successfully.
May 14 00:10:34.224709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:10:34.224756 systemd[1]: kubelet.service: Consumed 996ms CPU time, 121.9M memory peak.
May 14 00:10:34.227658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 00:10:34.439345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 00:10:34.450275 (kubelet)[2809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 00:10:34.563257 kubelet[2809]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:10:34.563257 kubelet[2809]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 14 00:10:34.563257 kubelet[2809]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 00:10:34.569919 kubelet[2809]: I0514 00:10:34.569823 2809 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 14 00:10:34.581899 kubelet[2809]: I0514 00:10:34.581863 2809 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 14 00:10:34.582082 kubelet[2809]: I0514 00:10:34.582074 2809 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 14 00:10:34.582775 kubelet[2809]: I0514 00:10:34.582764 2809 server.go:954] "Client rotation is on, will bootstrap in background"
May 14 00:10:34.585576 kubelet[2809]: I0514 00:10:34.585546 2809 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 14 00:10:34.598954 kubelet[2809]: I0514 00:10:34.598921 2809 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 14 00:10:34.615539 kubelet[2809]: I0514 00:10:34.615517 2809 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 14 00:10:34.620574 kubelet[2809]: I0514 00:10:34.620459 2809 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 14 00:10:34.621347 kubelet[2809]: I0514 00:10:34.620875 2809 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 14 00:10:34.621347 kubelet[2809]: I0514 00:10:34.620907 2809 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-186718797f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 14 00:10:34.621347 kubelet[2809]: I0514 00:10:34.621199 2809 topology_manager.go:138] "Creating topology manager with none policy"
May 14 00:10:34.621347 kubelet[2809]: I0514 00:10:34.621214 2809 container_manager_linux.go:304] "Creating device plugin manager"
May 14 00:10:34.621613 kubelet[2809]: I0514 00:10:34.621297 2809 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:10:34.622044 kubelet[2809]: I0514 00:10:34.621859 2809 kubelet.go:446] "Attempting to sync node with API server"
May 14 00:10:34.622044 kubelet[2809]: I0514 00:10:34.621884 2809 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 14 00:10:34.622389 kubelet[2809]: I0514 00:10:34.622309 2809 kubelet.go:352] "Adding apiserver pod source"
May 14 00:10:34.622389 kubelet[2809]: I0514 00:10:34.622330 2809 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 14 00:10:34.641351 kubelet[2809]: I0514 00:10:34.641246 2809 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
May 14 00:10:34.642200 kubelet[2809]: I0514 00:10:34.642067 2809 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 14 00:10:34.646548 kubelet[2809]: I0514 00:10:34.645122 2809 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 14 00:10:34.646548 kubelet[2809]: I0514 00:10:34.645157 2809 server.go:1287] "Started kubelet"
May 14 00:10:34.648723 kubelet[2809]: I0514 00:10:34.648710 2809 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 14 00:10:34.653062 kubelet[2809]: I0514 00:10:34.653012 2809 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 14 00:10:34.655850 kubelet[2809]: I0514 00:10:34.655807 2809 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 14 00:10:34.655963 kubelet[2809]: I0514 00:10:34.655938 2809 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 14 00:10:34.656115 kubelet[2809]: I0514 00:10:34.656088 2809 reconciler.go:26] "Reconciler: start to sync state"
May 14 00:10:34.658754 kubelet[2809]: I0514 00:10:34.658125 2809 server.go:490] "Adding debug handlers to kubelet server"
May 14 00:10:34.663138 kubelet[2809]: I0514 00:10:34.663079 2809 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 14 00:10:34.663432 kubelet[2809]: I0514 00:10:34.663422 2809 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 14 00:10:34.663821 kubelet[2809]: I0514 00:10:34.663792 2809 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 14 00:10:34.664904 kubelet[2809]: I0514 00:10:34.664892 2809 factory.go:221] Registration of the systemd container factory successfully
May 14 00:10:34.666282 kubelet[2809]: I0514 00:10:34.665151 2809 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 14 00:10:34.673607 kubelet[2809]: E0514 00:10:34.673588 2809 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 14 00:10:34.674050 kubelet[2809]: I0514 00:10:34.674020 2809 factory.go:221] Registration of the containerd container factory successfully
May 14 00:10:34.685104 kubelet[2809]: I0514 00:10:34.685079 2809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 14 00:10:34.686701 kubelet[2809]: I0514 00:10:34.686680 2809 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 14 00:10:34.686792 kubelet[2809]: I0514 00:10:34.686780 2809 status_manager.go:227] "Starting to sync pod status with apiserver"
May 14 00:10:34.686851 kubelet[2809]: I0514 00:10:34.686844 2809 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
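[Editor's note: the NodeConfig dump above lists the kubelet's HardEvictionThresholds: memory.available below an absolute 100Mi, and the filesystem signals below fractions of capacity (10%, 5%, 15%, 5%). The Go sketch below is illustrative only, with invented names, not kubelet's eviction manager: it shows how a quantity-based threshold compares available bytes directly while a percentage-based one is resolved against capacity first.]

```go
// Hedged sketch of how a hard eviction threshold like those logged
// above is evaluated (simplified; the real logic is in kubelet's
// eviction manager).
package main

import "fmt"

// Threshold mirrors the Signal/Value shape in the logged JSON: exactly
// one of Quantity (absolute bytes) or Percentage (fraction of capacity)
// is set.
type Threshold struct {
	Signal     string
	Quantity   int64
	Percentage float64
}

// exceeded reports whether available has fallen below the threshold.
func exceeded(t Threshold, available, capacity int64) bool {
	limit := t.Quantity
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := Threshold{Signal: "memory.available", Quantity: 100 << 20} // "100Mi" from the log
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.1}     // 10% from the log

	fmt.Println(exceeded(memory, 64<<20, 8<<30))   // true: 64Mi available is below 100Mi
	fmt.Println(exceeded(nodefs, 50<<30, 100<<30)) // false: 50% free is above the 10% floor
}
```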
May 14 00:10:34.686896 kubelet[2809]: I0514 00:10:34.686891 2809 kubelet.go:2388] "Starting kubelet main sync loop"
May 14 00:10:34.686969 kubelet[2809]: E0514 00:10:34.686958 2809 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 14 00:10:34.715140 kubelet[2809]: I0514 00:10:34.715120 2809 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 14 00:10:34.715313 kubelet[2809]: I0514 00:10:34.715304 2809 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 14 00:10:34.716148 kubelet[2809]: I0514 00:10:34.715357 2809 state_mem.go:36] "Initialized new in-memory state store"
May 14 00:10:34.716383 kubelet[2809]: I0514 00:10:34.716371 2809 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 14 00:10:34.716446 kubelet[2809]: I0514 00:10:34.716429 2809 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 14 00:10:34.717114 kubelet[2809]: I0514 00:10:34.716481 2809 policy_none.go:49] "None policy: Start"
May 14 00:10:34.717114 kubelet[2809]: I0514 00:10:34.716494 2809 memory_manager.go:186] "Starting memorymanager" policy="None"
May 14 00:10:34.717114 kubelet[2809]: I0514 00:10:34.716504 2809 state_mem.go:35] "Initializing new in-memory state store"
May 14 00:10:34.717114 kubelet[2809]: I0514 00:10:34.716592 2809 state_mem.go:75] "Updated machine memory state"
May 14 00:10:34.720930 kubelet[2809]: I0514 00:10:34.720917 2809 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 14 00:10:34.721270 kubelet[2809]: I0514 00:10:34.721260 2809 eviction_manager.go:189] "Eviction manager: starting control loop"
May 14 00:10:34.721358 kubelet[2809]: I0514 00:10:34.721323 2809 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 14 00:10:34.721846 kubelet[2809]: I0514 00:10:34.721836 2809 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 14 00:10:34.726636 kubelet[2809]: E0514 00:10:34.726111 2809 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 14 00:10:34.789810 kubelet[2809]: I0514 00:10:34.789753 2809 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-186718797f"
May 14 00:10:34.790259 kubelet[2809]: I0514 00:10:34.790104 2809 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:34.790500 kubelet[2809]: I0514 00:10:34.790475 2809 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:34.808371 kubelet[2809]: E0514 00:10:34.808289 2809 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284-0-0-n-186718797f\" already exists" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:34.837787 kubelet[2809]: I0514 00:10:34.837726 2809 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-186718797f"
May 14 00:10:34.853536 kubelet[2809]: I0514 00:10:34.853402 2809 kubelet_node_status.go:125] "Node was previously registered" node="ci-4284-0-0-n-186718797f"
May 14 00:10:34.853712 kubelet[2809]: I0514 00:10:34.853564 2809 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284-0-0-n-186718797f"
May 14 00:10:34.957605 kubelet[2809]: I0514 00:10:34.957116 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4f488b485dafcac26d797b0d5f412ff-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-186718797f\" (UID: \"b4f488b485dafcac26d797b0d5f412ff\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:34.957605 kubelet[2809]: I0514 00:10:34.957171 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4f488b485dafcac26d797b0d5f412ff-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-186718797f\" (UID: \"b4f488b485dafcac26d797b0d5f412ff\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:34.957605 kubelet[2809]: I0514 00:10:34.957258 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dbaea622ff035f2daf1127e9e864dfd-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-186718797f\" (UID: \"1dbaea622ff035f2daf1127e9e864dfd\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:34.957605 kubelet[2809]: I0514 00:10:34.957290 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1dbaea622ff035f2daf1127e9e864dfd-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-186718797f\" (UID: \"1dbaea622ff035f2daf1127e9e864dfd\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:34.957605 kubelet[2809]: I0514 00:10:34.957314 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dbaea622ff035f2daf1127e9e864dfd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-186718797f\" (UID: \"1dbaea622ff035f2daf1127e9e864dfd\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:34.957918 kubelet[2809]: I0514 00:10:34.957339 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49f326ba8d692043be41345871f0382b-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-186718797f\" (UID: \"49f326ba8d692043be41345871f0382b\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-186718797f"
May 14 00:10:34.957918 kubelet[2809]: I0514 00:10:34.957360 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4f488b485dafcac26d797b0d5f412ff-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-186718797f\" (UID: \"b4f488b485dafcac26d797b0d5f412ff\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:34.957918 kubelet[2809]: I0514 00:10:34.957401 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1dbaea622ff035f2daf1127e9e864dfd-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-186718797f\" (UID: \"1dbaea622ff035f2daf1127e9e864dfd\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:34.957918 kubelet[2809]: I0514 00:10:34.957422 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dbaea622ff035f2daf1127e9e864dfd-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-186718797f\" (UID: \"1dbaea622ff035f2daf1127e9e864dfd\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f"
May 14 00:10:35.637287 kubelet[2809]: I0514 00:10:35.636833 2809 apiserver.go:52] "Watching apiserver"
May 14 00:10:35.656895 kubelet[2809]: I0514 00:10:35.656763 2809 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 00:10:35.712385 kubelet[2809]: I0514 00:10:35.710550 2809 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-186718797f"
May 14 00:10:35.712385 kubelet[2809]: I0514 00:10:35.710989 2809 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:35.739001 kubelet[2809]: E0514 00:10:35.738789 2809 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284-0-0-n-186718797f\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f"
May 14 00:10:35.741327 kubelet[2809]: E0514 00:10:35.741308 2809 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-186718797f\" already exists" pod="kube-system/kube-scheduler-ci-4284-0-0-n-186718797f"
May 14 00:10:35.779272 kubelet[2809]: I0514 00:10:35.779074 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-186718797f" podStartSLOduration=3.7790503429999998 podStartE2EDuration="3.779050343s" podCreationTimestamp="2025-05-14 00:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:10:35.768382202 +0000 UTC m=+1.308866069" watchObservedRunningTime="2025-05-14 00:10:35.779050343 +0000 UTC m=+1.319534230"
May 14 00:10:35.780579 kubelet[2809]: I0514 00:10:35.780533 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-n-186718797f" podStartSLOduration=1.780522767 podStartE2EDuration="1.780522767s" podCreationTimestamp="2025-05-14 00:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:10:35.780390196 +0000 UTC m=+1.320874072" watchObservedRunningTime="2025-05-14 00:10:35.780522767 +0000 UTC m=+1.321006654"
May 14 00:10:35.815004 kubelet[2809]: I0514 00:10:35.814917 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-n-186718797f" podStartSLOduration=1.814895098 podStartE2EDuration="1.814895098s" podCreationTimestamp="2025-05-14 00:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:10:35.791291222 +0000 UTC m=+1.331775108" watchObservedRunningTime="2025-05-14 00:10:35.814895098 +0000 UTC m=+1.355378983"
May 14 00:10:39.151400 kubelet[2809]: I0514 00:10:39.151359 2809 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 14 00:10:39.152009 containerd[1528]: time="2025-05-14T00:10:39.151831715Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 14 00:10:39.152316 kubelet[2809]: I0514 00:10:39.152177 2809 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 14 00:10:39.962148 systemd[1]: Created slice kubepods-besteffort-pod069ed8b7_9e82_4eb9_9734_74b899bf073f.slice - libcontainer container kubepods-besteffort-pod069ed8b7_9e82_4eb9_9734_74b899bf073f.slice.
May 14 00:10:39.988525 kubelet[2809]: I0514 00:10:39.988482 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/069ed8b7-9e82-4eb9-9734-74b899bf073f-kube-proxy\") pod \"kube-proxy-hk2cc\" (UID: \"069ed8b7-9e82-4eb9-9734-74b899bf073f\") " pod="kube-system/kube-proxy-hk2cc"
May 14 00:10:39.988841 kubelet[2809]: I0514 00:10:39.988818 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/069ed8b7-9e82-4eb9-9734-74b899bf073f-lib-modules\") pod \"kube-proxy-hk2cc\" (UID: \"069ed8b7-9e82-4eb9-9734-74b899bf073f\") " pod="kube-system/kube-proxy-hk2cc"
May 14 00:10:39.989004 kubelet[2809]: I0514 00:10:39.988987 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/069ed8b7-9e82-4eb9-9734-74b899bf073f-xtables-lock\") pod \"kube-proxy-hk2cc\" (UID: \"069ed8b7-9e82-4eb9-9734-74b899bf073f\") " pod="kube-system/kube-proxy-hk2cc"
May 14 00:10:39.989155 kubelet[2809]: I0514 00:10:39.989134 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt84q\" (UniqueName: \"kubernetes.io/projected/069ed8b7-9e82-4eb9-9734-74b899bf073f-kube-api-access-jt84q\") pod \"kube-proxy-hk2cc\" (UID: \"069ed8b7-9e82-4eb9-9734-74b899bf073f\") " pod="kube-system/kube-proxy-hk2cc"
May 14 00:10:40.278593 containerd[1528]: time="2025-05-14T00:10:40.278454345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk2cc,Uid:069ed8b7-9e82-4eb9-9734-74b899bf073f,Namespace:kube-system,Attempt:0,}"
May 14 00:10:40.319868 systemd[1]: Created slice kubepods-besteffort-pod592483e6_b14b_4389_b1cf_c738b6c3c71e.slice - libcontainer container kubepods-besteffort-pod592483e6_b14b_4389_b1cf_c738b6c3c71e.slice.
May 14 00:10:40.321359 containerd[1528]: time="2025-05-14T00:10:40.321324152Z" level=info msg="connecting to shim 6ad3526e8b7b86dcc7c9e73e082cd5d23daf8c54738b7abc338efd0c31b47d0d" address="unix:///run/containerd/s/9e02f1cd005a43b17615ace6f45c2874b6f219f81791b207d05438d19256f193" namespace=k8s.io protocol=ttrpc version=3
May 14 00:10:40.352351 systemd[1]: Started cri-containerd-6ad3526e8b7b86dcc7c9e73e082cd5d23daf8c54738b7abc338efd0c31b47d0d.scope - libcontainer container 6ad3526e8b7b86dcc7c9e73e082cd5d23daf8c54738b7abc338efd0c31b47d0d.
May 14 00:10:40.384501 containerd[1528]: time="2025-05-14T00:10:40.384426516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk2cc,Uid:069ed8b7-9e82-4eb9-9734-74b899bf073f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ad3526e8b7b86dcc7c9e73e082cd5d23daf8c54738b7abc338efd0c31b47d0d\""
May 14 00:10:40.388011 containerd[1528]: time="2025-05-14T00:10:40.387984724Z" level=info msg="CreateContainer within sandbox \"6ad3526e8b7b86dcc7c9e73e082cd5d23daf8c54738b7abc338efd0c31b47d0d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 14 00:10:40.392154 kubelet[2809]: I0514 00:10:40.392121 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/592483e6-b14b-4389-b1cf-c738b6c3c71e-var-lib-calico\") pod \"tigera-operator-789496d6f5-bzqwr\" (UID: \"592483e6-b14b-4389-b1cf-c738b6c3c71e\") " pod="tigera-operator/tigera-operator-789496d6f5-bzqwr"
May 14 00:10:40.392154 kubelet[2809]: I0514 00:10:40.392151 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4b8h\" (UniqueName: \"kubernetes.io/projected/592483e6-b14b-4389-b1cf-c738b6c3c71e-kube-api-access-m4b8h\") pod \"tigera-operator-789496d6f5-bzqwr\" (UID: \"592483e6-b14b-4389-b1cf-c738b6c3c71e\") " pod="tigera-operator/tigera-operator-789496d6f5-bzqwr"
May 14 00:10:40.404859 containerd[1528]: time="2025-05-14T00:10:40.404820345Z" level=info msg="Container 944b1d046938f47b48d20ec511f399ba920fc4da925862273a544e4613878ad6: CDI devices from CRI Config.CDIDevices: []"
May 14 00:10:40.408241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628712456.mount: Deactivated successfully.
May 14 00:10:40.421207 containerd[1528]: time="2025-05-14T00:10:40.421177392Z" level=info msg="CreateContainer within sandbox \"6ad3526e8b7b86dcc7c9e73e082cd5d23daf8c54738b7abc338efd0c31b47d0d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"944b1d046938f47b48d20ec511f399ba920fc4da925862273a544e4613878ad6\""
May 14 00:10:40.423429 containerd[1528]: time="2025-05-14T00:10:40.421783070Z" level=info msg="StartContainer for \"944b1d046938f47b48d20ec511f399ba920fc4da925862273a544e4613878ad6\""
May 14 00:10:40.423686 containerd[1528]: time="2025-05-14T00:10:40.422934125Z" level=info msg="connecting to shim 944b1d046938f47b48d20ec511f399ba920fc4da925862273a544e4613878ad6" address="unix:///run/containerd/s/9e02f1cd005a43b17615ace6f45c2874b6f219f81791b207d05438d19256f193" protocol=ttrpc version=3
May 14 00:10:40.452350 systemd[1]: Started cri-containerd-944b1d046938f47b48d20ec511f399ba920fc4da925862273a544e4613878ad6.scope - libcontainer container 944b1d046938f47b48d20ec511f399ba920fc4da925862273a544e4613878ad6.
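[Editor's note: the "Updating runtime config through cri with podcidr" entry above corresponds to CRI's UpdateRuntimeConfig call, which is how kubelet hands the node's pod CIDR to the runtime; containerd acknowledges and, per its next line, waits for a CNI config to be dropped in. A hedged sketch of that single call, with the socket path assumed as before:]

```go
// Hedged sketch: push a pod CIDR to the runtime via CRI, mirroring the
// kuberuntime_manager.go entry above. Illustrative only.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed socket
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// The CIDR value is the one kubelet logged: 192.168.0.0/24.
	_, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```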
May 14 00:10:40.453927 sudo[1901]: pam_unix(sudo:session): session closed for user root
May 14 00:10:40.498126 containerd[1528]: time="2025-05-14T00:10:40.497969703Z" level=info msg="StartContainer for \"944b1d046938f47b48d20ec511f399ba920fc4da925862273a544e4613878ad6\" returns successfully"
May 14 00:10:40.612473 sshd[1900]: Connection closed by 139.178.89.65 port 52998
May 14 00:10:40.614149 sshd-session[1898]: pam_unix(sshd:session): session closed for user core
May 14 00:10:40.629215 containerd[1528]: time="2025-05-14T00:10:40.628874184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-bzqwr,Uid:592483e6-b14b-4389-b1cf-c738b6c3c71e,Namespace:tigera-operator,Attempt:0,}"
May 14 00:10:40.630943 systemd[1]: sshd@7-37.27.39.104:22-139.178.89.65:52998.service: Deactivated successfully.
May 14 00:10:40.638761 systemd[1]: session-7.scope: Deactivated successfully.
May 14 00:10:40.639680 systemd[1]: session-7.scope: Consumed 5.369s CPU time, 152.3M memory peak.
May 14 00:10:40.644015 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit.
May 14 00:10:40.646242 systemd-logind[1496]: Removed session 7.
May 14 00:10:40.684332 containerd[1528]: time="2025-05-14T00:10:40.683792458Z" level=info msg="connecting to shim 7ad623e2f4e8318218c8b161ecfb2f68b228fe7d8b3049e8013d47fb567c9d7d" address="unix:///run/containerd/s/b902cd7d742a369811799d177eb8ad90bf9d1b3bead4b49c660db31e1444530d" namespace=k8s.io protocol=ttrpc version=3
May 14 00:10:40.721938 systemd[1]: Started cri-containerd-7ad623e2f4e8318218c8b161ecfb2f68b228fe7d8b3049e8013d47fb567c9d7d.scope - libcontainer container 7ad623e2f4e8318218c8b161ecfb2f68b228fe7d8b3049e8013d47fb567c9d7d.
May 14 00:10:40.758560 kubelet[2809]: I0514 00:10:40.758402 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hk2cc" podStartSLOduration=1.7583807089999999 podStartE2EDuration="1.758380709s" podCreationTimestamp="2025-05-14 00:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:10:40.756404054 +0000 UTC m=+6.296887919" watchObservedRunningTime="2025-05-14 00:10:40.758380709 +0000 UTC m=+6.298864575"
May 14 00:10:40.797973 containerd[1528]: time="2025-05-14T00:10:40.797930153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-bzqwr,Uid:592483e6-b14b-4389-b1cf-c738b6c3c71e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7ad623e2f4e8318218c8b161ecfb2f68b228fe7d8b3049e8013d47fb567c9d7d\""
May 14 00:10:40.802158 containerd[1528]: time="2025-05-14T00:10:40.802130456Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 14 00:10:43.149257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1438317792.mount: Deactivated successfully.
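[Editor's note: the PullImage entry above starts the fetch of the tigera-operator image; the CRI call returns the resolved image reference once the pull completes, as the ImageCreate/Pulled events that follow show. A hedged sketch of the same call through the CRI image service, socket path assumed as before:]

```go
// Hedged sketch: pull an image via the CRI image service, mirroring the
// containerd PullImage entry above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed socket
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	img := runtimeapi.NewImageServiceClient(conn)

	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.7"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// On success this is the resolved reference, like the
	// "returns image reference" line further down.
	fmt.Println("pulled:", resp.ImageRef)
}
```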
May 14 00:10:43.511350 containerd[1528]: time="2025-05-14T00:10:43.511296603Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:43.512703 containerd[1528]: time="2025-05-14T00:10:43.512657507Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662"
May 14 00:10:43.514784 containerd[1528]: time="2025-05-14T00:10:43.513812182Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:43.517236 containerd[1528]: time="2025-05-14T00:10:43.516113950Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:43.517236 containerd[1528]: time="2025-05-14T00:10:43.516962124Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 2.714802596s"
May 14 00:10:43.517236 containerd[1528]: time="2025-05-14T00:10:43.516986429Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\""
May 14 00:10:43.519489 containerd[1528]: time="2025-05-14T00:10:43.519456315Z" level=info msg="CreateContainer within sandbox \"7ad623e2f4e8318218c8b161ecfb2f68b228fe7d8b3049e8013d47fb567c9d7d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
May 14 00:10:43.531243 containerd[1528]: time="2025-05-14T00:10:43.530129363Z" level=info msg="Container e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1: CDI devices from CRI Config.CDIDevices: []"
May 14 00:10:43.546599 containerd[1528]: time="2025-05-14T00:10:43.546534245Z" level=info msg="CreateContainer within sandbox \"7ad623e2f4e8318218c8b161ecfb2f68b228fe7d8b3049e8013d47fb567c9d7d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1\""
May 14 00:10:43.548422 containerd[1528]: time="2025-05-14T00:10:43.547282094Z" level=info msg="StartContainer for \"e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1\""
May 14 00:10:43.548422 containerd[1528]: time="2025-05-14T00:10:43.548120912Z" level=info msg="connecting to shim e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1" address="unix:///run/containerd/s/b902cd7d742a369811799d177eb8ad90bf9d1b3bead4b49c660db31e1444530d" protocol=ttrpc version=3
May 14 00:10:43.580370 systemd[1]: Started cri-containerd-e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1.scope - libcontainer container e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1.
May 14 00:10:43.607351 containerd[1528]: time="2025-05-14T00:10:43.607310473Z" level=info msg="StartContainer for \"e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1\" returns successfully"
May 14 00:10:43.745510 kubelet[2809]: I0514 00:10:43.745431 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-bzqwr" podStartSLOduration=1.029106318 podStartE2EDuration="3.745403731s" podCreationTimestamp="2025-05-14 00:10:40 +0000 UTC" firstStartedPulling="2025-05-14 00:10:40.801532573 +0000 UTC m=+6.342016438" lastFinishedPulling="2025-05-14 00:10:43.517829986 +0000 UTC m=+9.058313851" observedRunningTime="2025-05-14 00:10:43.744645833 +0000 UTC m=+9.285129698" watchObservedRunningTime="2025-05-14 00:10:43.745403731 +0000 UTC m=+9.285887627"
May 14 00:10:46.760510 systemd[1]: Created slice kubepods-besteffort-pod1dc9af7d_96cc_4850_b357_4472802b0efd.slice - libcontainer container kubepods-besteffort-pod1dc9af7d_96cc_4850_b357_4472802b0efd.slice.
May 14 00:10:46.892684 systemd[1]: Created slice kubepods-besteffort-pod0bb1ae72_207d_4908_a83b_fd4d816d67b9.slice - libcontainer container kubepods-besteffort-pod0bb1ae72_207d_4908_a83b_fd4d816d67b9.slice.
May 14 00:10:46.936770 kubelet[2809]: I0514 00:10:46.936620 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdt5g\" (UniqueName: \"kubernetes.io/projected/1dc9af7d-96cc-4850-b357-4472802b0efd-kube-api-access-bdt5g\") pod \"calico-typha-75867b9799-7hksd\" (UID: \"1dc9af7d-96cc-4850-b357-4472802b0efd\") " pod="calico-system/calico-typha-75867b9799-7hksd"
May 14 00:10:46.936770 kubelet[2809]: I0514 00:10:46.936677 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1dc9af7d-96cc-4850-b357-4472802b0efd-typha-certs\") pod \"calico-typha-75867b9799-7hksd\" (UID: \"1dc9af7d-96cc-4850-b357-4472802b0efd\") " pod="calico-system/calico-typha-75867b9799-7hksd"
May 14 00:10:46.936770 kubelet[2809]: I0514 00:10:46.936702 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dc9af7d-96cc-4850-b357-4472802b0efd-tigera-ca-bundle\") pod \"calico-typha-75867b9799-7hksd\" (UID: \"1dc9af7d-96cc-4850-b357-4472802b0efd\") " pod="calico-system/calico-typha-75867b9799-7hksd"
May 14 00:10:47.004478 kubelet[2809]: E0514 00:10:47.004374 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86hgs" podUID="fe53988a-3756-47b4-b495-d0f23a69a35f"
May 14 00:10:47.037713 kubelet[2809]: I0514 00:10:47.037264 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bb1ae72-207d-4908-a83b-fd4d816d67b9-lib-modules\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.037713 kubelet[2809]: I0514 00:10:47.037323 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0bb1ae72-207d-4908-a83b-fd4d816d67b9-policysync\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.037713 kubelet[2809]: I0514 00:10:47.037336 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bb1ae72-207d-4908-a83b-fd4d816d67b9-tigera-ca-bundle\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.037713 kubelet[2809]: I0514 00:10:47.037350 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0bb1ae72-207d-4908-a83b-fd4d816d67b9-node-certs\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.037713 kubelet[2809]: I0514 00:10:47.037364 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0bb1ae72-207d-4908-a83b-fd4d816d67b9-var-run-calico\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.038447 kubelet[2809]: I0514 00:10:47.037380 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0bb1ae72-207d-4908-a83b-fd4d816d67b9-cni-log-dir\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.038447 kubelet[2809]: I0514 00:10:47.037403 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bb1ae72-207d-4908-a83b-fd4d816d67b9-xtables-lock\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.038447 kubelet[2809]: I0514 00:10:47.037442 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0bb1ae72-207d-4908-a83b-fd4d816d67b9-var-lib-calico\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.038447 kubelet[2809]: I0514 00:10:47.037456 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0bb1ae72-207d-4908-a83b-fd4d816d67b9-cni-bin-dir\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.038447 kubelet[2809]: I0514 00:10:47.037469 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0bb1ae72-207d-4908-a83b-fd4d816d67b9-cni-net-dir\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.040392 kubelet[2809]: I0514 00:10:47.037483 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0bb1ae72-207d-4908-a83b-fd4d816d67b9-flexvol-driver-host\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.040392 kubelet[2809]: I0514 00:10:47.037497 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlkz8\" (UniqueName: \"kubernetes.io/projected/0bb1ae72-207d-4908-a83b-fd4d816d67b9-kube-api-access-tlkz8\") pod \"calico-node-qhscz\" (UID: \"0bb1ae72-207d-4908-a83b-fd4d816d67b9\") " pod="calico-system/calico-node-qhscz"
May 14 00:10:47.067805 containerd[1528]: time="2025-05-14T00:10:47.067638638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75867b9799-7hksd,Uid:1dc9af7d-96cc-4850-b357-4472802b0efd,Namespace:calico-system,Attempt:0,}"
May 14 00:10:47.105318 containerd[1528]: time="2025-05-14T00:10:47.104044641Z" level=info msg="connecting to shim c68edb6db6f4e3dff6336790687425439f884113334df71a916307a56431b00d" address="unix:///run/containerd/s/2077d572eae3e4a02c537a56500cc6b48ba79353c8fd462e703d4869409eae5f" namespace=k8s.io protocol=ttrpc version=3
May 14 00:10:47.127425 systemd[1]: Started cri-containerd-c68edb6db6f4e3dff6336790687425439f884113334df71a916307a56431b00d.scope - libcontainer container c68edb6db6f4e3dff6336790687425439f884113334df71a916307a56431b00d.
May 14 00:10:47.138034 kubelet[2809]: I0514 00:10:47.137995 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fe53988a-3756-47b4-b495-d0f23a69a35f-varrun\") pod \"csi-node-driver-86hgs\" (UID: \"fe53988a-3756-47b4-b495-d0f23a69a35f\") " pod="calico-system/csi-node-driver-86hgs"
May 14 00:10:47.138183 kubelet[2809]: I0514 00:10:47.138063 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fe53988a-3756-47b4-b495-d0f23a69a35f-socket-dir\") pod \"csi-node-driver-86hgs\" (UID: \"fe53988a-3756-47b4-b495-d0f23a69a35f\") " pod="calico-system/csi-node-driver-86hgs"
May 14 00:10:47.138183 kubelet[2809]: I0514 00:10:47.138102 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe53988a-3756-47b4-b495-d0f23a69a35f-kubelet-dir\") pod \"csi-node-driver-86hgs\" (UID: \"fe53988a-3756-47b4-b495-d0f23a69a35f\") " pod="calico-system/csi-node-driver-86hgs"
May 14 00:10:47.138183 kubelet[2809]: I0514 00:10:47.138119 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fe53988a-3756-47b4-b495-d0f23a69a35f-registration-dir\") pod \"csi-node-driver-86hgs\" (UID: \"fe53988a-3756-47b4-b495-d0f23a69a35f\") " pod="calico-system/csi-node-driver-86hgs"
May 14 00:10:47.138183 kubelet[2809]: I0514 00:10:47.138132 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8rrp\" (UniqueName: \"kubernetes.io/projected/fe53988a-3756-47b4-b495-d0f23a69a35f-kube-api-access-j8rrp\") pod \"csi-node-driver-86hgs\" (UID: \"fe53988a-3756-47b4-b495-d0f23a69a35f\") " pod="calico-system/csi-node-driver-86hgs"
May 14 00:10:47.150163 kubelet[2809]: E0514 00:10:47.150133 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.150163 kubelet[2809]: W0514 00:10:47.150159 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.150344 kubelet[2809]: E0514 00:10:47.150183 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.155170 kubelet[2809]: E0514 00:10:47.155118 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.155170 kubelet[2809]: W0514 00:10:47.155133 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.155170 kubelet[2809]: E0514 00:10:47.155149 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.183346 containerd[1528]: time="2025-05-14T00:10:47.183214543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75867b9799-7hksd,Uid:1dc9af7d-96cc-4850-b357-4472802b0efd,Namespace:calico-system,Attempt:0,} returns sandbox id \"c68edb6db6f4e3dff6336790687425439f884113334df71a916307a56431b00d\""
May 14 00:10:47.185016 containerd[1528]: time="2025-05-14T00:10:47.184872598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\""
May 14 00:10:47.196681 containerd[1528]: time="2025-05-14T00:10:47.196440304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qhscz,Uid:0bb1ae72-207d-4908-a83b-fd4d816d67b9,Namespace:calico-system,Attempt:0,}"
May 14 00:10:47.219548 containerd[1528]: time="2025-05-14T00:10:47.219369175Z" level=info msg="connecting to shim 44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7" address="unix:///run/containerd/s/8b98191525fdb65d93424dbea43c1516f3d56e302b8f2511d5696f40ed529d7b" namespace=k8s.io protocol=ttrpc version=3
May 14 00:10:47.239568 kubelet[2809]: E0514 00:10:47.239483 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.239568 kubelet[2809]: W0514 00:10:47.239502 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.239568 kubelet[2809]: E0514 00:10:47.239522 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.240124 kubelet[2809]: E0514 00:10:47.240004 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.240124 kubelet[2809]: W0514 00:10:47.240031 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.240124 kubelet[2809]: E0514 00:10:47.240048 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.241029 kubelet[2809]: E0514 00:10:47.240953 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.241029 kubelet[2809]: W0514 00:10:47.240962 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.241029 kubelet[2809]: E0514 00:10:47.240972 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.241506 kubelet[2809]: E0514 00:10:47.241469 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.241506 kubelet[2809]: W0514 00:10:47.241478 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.241781 kubelet[2809]: E0514 00:10:47.241661 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.242035 kubelet[2809]: E0514 00:10:47.241881 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.242035 kubelet[2809]: W0514 00:10:47.241888 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.242155 kubelet[2809]: E0514 00:10:47.242116 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.242155 kubelet[2809]: W0514 00:10:47.242125 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.242462 kubelet[2809]: E0514 00:10:47.242252 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.242462 kubelet[2809]: E0514 00:10:47.242276 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.242462 kubelet[2809]: W0514 00:10:47.242282 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.242462 kubelet[2809]: E0514 00:10:47.242288 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.242462 kubelet[2809]: E0514 00:10:47.242290 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.242983 kubelet[2809]: E0514 00:10:47.242880 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.242983 kubelet[2809]: W0514 00:10:47.242890 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.243172 kubelet[2809]: E0514 00:10:47.242904 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.243172 kubelet[2809]: E0514 00:10:47.243119 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.243172 kubelet[2809]: W0514 00:10:47.243152 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.243566 kubelet[2809]: E0514 00:10:47.243339 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.243944 kubelet[2809]: E0514 00:10:47.243936 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.244040 kubelet[2809]: W0514 00:10:47.244032 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.244179 kubelet[2809]: E0514 00:10:47.244167 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.244382 kubelet[2809]: E0514 00:10:47.244374 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:47.244627 kubelet[2809]: W0514 00:10:47.244417 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:47.244749 kubelet[2809]: E0514 00:10:47.244664 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
May 14 00:10:47.244752 systemd[1]: Started cri-containerd-44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7.scope - libcontainer container 44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7.
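[Editor's note: the pod_startup_latency_tracker entries above report podStartE2EDuration (pod creation to observed running) alongside podStartSLOduration, which excludes time spent pulling images. The tigera-operator entry's numbers reproduce exactly; the Go sketch below shows the arithmetic only and is not kubelet source.]

```go
// Worked example: podStartSLOduration = podStartE2EDuration minus the
// image-pull window (lastFinishedPulling - firstStartedPulling), using
// the tigera-operator values from the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 3745403731 * time.Nanosecond // podStartE2EDuration="3.745403731s"
	firstPull, _ := time.Parse(time.RFC3339Nano, "2025-05-14T00:10:40.801532573Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2025-05-14T00:10:43.517829986Z")

	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo.Seconds()) // 1.029106318, matching podStartSLOduration
}
```

For the control-plane pods earlier, firstStartedPulling and lastFinishedPulling are the zero time (images were already present), so the two durations coincide.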
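The three kubelet messages above are one failure repeating: the dynamic plugin prober scans the FlexVolume plugin directory (set by kubelet's --volume-plugin-dir flag, evidently /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ on this host), finds the vendor directory nodeagent~uds, and execs its uds binary with the argument init. The binary is not installed yet, so the exec fails, stdout stays empty, and decoding "" as JSON yields "unexpected end of JSON input". A minimal Go sketch of that probe path (simplified from the behavior logged by kubelet's driver-call.go; the function and type names here are illustrative, not kubelet's):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors just the part of the FlexVolume response that
// matters here (an illustrative subset, not kubelet's full type).
type driverStatus struct {
	Status string `json:"status"`
}

// probeInit reproduces the failing path: exec "<driver> init", then
// JSON-decode whatever the driver wrote to stdout.
func probeInit(driver string) error {
	out, execErr := exec.Command(driver, "init").Output()
	if execErr != nil {
		// Matches: FlexVolume: driver call failed: ... error:
		// executable file not found in $PATH, output: ""
		fmt.Printf("driver call failed: %v, output: %q\n", execErr, string(out))
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is exactly "unexpected end of JSON input".
		return fmt.Errorf("failed to unmarshal output: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}

The triplet recurs in bursts as the prober re-probes the plugin directory; each burst is the same missing-binary fault, not a new one.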
May 14 00:10:47.299748 containerd[1528]: time="2025-05-14T00:10:47.299596328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qhscz,Uid:0bb1ae72-207d-4908-a83b-fd4d816d67b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7\""
May 14 00:10:48.690018 kubelet[2809]: E0514 00:10:48.687961 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86hgs" podUID="fe53988a-3756-47b4-b495-d0f23a69a35f"
May 14 00:10:50.055897 containerd[1528]: time="2025-05-14T00:10:50.055833671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:50.057166 containerd[1528]: time="2025-05-14T00:10:50.057041432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870"
May 14 00:10:50.059200 containerd[1528]: time="2025-05-14T00:10:50.058297864Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:50.087144 containerd[1528]: time="2025-05-14T00:10:50.087091656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 00:10:50.087966 containerd[1528]: time="2025-05-14T00:10:50.087778228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.902880184s"
May 14 00:10:50.087966 containerd[1528]: time="2025-05-14T00:10:50.087812211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\""
May 14 00:10:50.089183 containerd[1528]: time="2025-05-14T00:10:50.089154070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 14 00:10:50.104053 containerd[1528]: time="2025-05-14T00:10:50.104021978Z" level=info msg="CreateContainer within sandbox \"c68edb6db6f4e3dff6336790687425439f884113334df71a916307a56431b00d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
time="2025-05-14T00:10:50.148052185Z" level=info msg="Container 027678d19c5103161b4fd7f1b3404b0c42082cfc92c7139d793ff87fd04f8e1b: CDI devices from CRI Config.CDIDevices: []" May 14 00:10:50.167587 containerd[1528]: time="2025-05-14T00:10:50.167526174Z" level=info msg="CreateContainer within sandbox \"c68edb6db6f4e3dff6336790687425439f884113334df71a916307a56431b00d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"027678d19c5103161b4fd7f1b3404b0c42082cfc92c7139d793ff87fd04f8e1b\"" May 14 00:10:50.171113 containerd[1528]: time="2025-05-14T00:10:50.170966793Z" level=info msg="StartContainer for \"027678d19c5103161b4fd7f1b3404b0c42082cfc92c7139d793ff87fd04f8e1b\"" May 14 00:10:50.171859 containerd[1528]: time="2025-05-14T00:10:50.171822617Z" level=info msg="connecting to shim 027678d19c5103161b4fd7f1b3404b0c42082cfc92c7139d793ff87fd04f8e1b" address="unix:///run/containerd/s/2077d572eae3e4a02c537a56500cc6b48ba79353c8fd462e703d4869409eae5f" protocol=ttrpc version=3 May 14 00:10:50.206411 systemd[1]: Started cri-containerd-027678d19c5103161b4fd7f1b3404b0c42082cfc92c7139d793ff87fd04f8e1b.scope - libcontainer container 027678d19c5103161b4fd7f1b3404b0c42082cfc92c7139d793ff87fd04f8e1b. May 14 00:10:50.255382 containerd[1528]: time="2025-05-14T00:10:50.255309680Z" level=info msg="StartContainer for \"027678d19c5103161b4fd7f1b3404b0c42082cfc92c7139d793ff87fd04f8e1b\" returns successfully" May 14 00:10:50.690139 kubelet[2809]: E0514 00:10:50.690011 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86hgs" podUID="fe53988a-3756-47b4-b495-d0f23a69a35f" May 14 00:10:50.863587 kubelet[2809]: E0514 00:10:50.863479 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:10:50.863587 kubelet[2809]: W0514 00:10:50.863530 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:10:50.863587 kubelet[2809]: E0514 00:10:50.863570 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:10:50.864131 kubelet[2809]: E0514 00:10:50.863917 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:10:50.864131 kubelet[2809]: W0514 00:10:50.863936 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:10:50.864131 kubelet[2809]: E0514 00:10:50.863957 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
May 14 00:10:51.785530 kubelet[2809]: I0514 00:10:51.784974 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 14 00:10:51.801908 kubelet[2809]: E0514 00:10:51.801845 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
May 14 00:10:51.801908 kubelet[2809]: W0514 00:10:51.801876 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
May 14 00:10:51.801908 kubelet[2809]: E0514 00:10:51.801902 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" May 14 00:10:51.814150 kubelet[2809]: E0514 00:10:51.814116 2809 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 00:10:51.814150 kubelet[2809]: W0514 00:10:51.814140 2809 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 00:10:51.814340 kubelet[2809]: E0514 00:10:51.814154 2809 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 00:10:52.066260 containerd[1528]: time="2025-05-14T00:10:52.064933958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:10:52.068216 containerd[1528]: time="2025-05-14T00:10:52.068158104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 00:10:52.070582 containerd[1528]: time="2025-05-14T00:10:52.070504414Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:10:52.073577 containerd[1528]: time="2025-05-14T00:10:52.072884856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:10:52.073577 containerd[1528]: time="2025-05-14T00:10:52.073243246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.984063579s" May 14 00:10:52.073577 containerd[1528]: time="2025-05-14T00:10:52.073265727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 00:10:52.076413 containerd[1528]: time="2025-05-14T00:10:52.076381233Z" level=info msg="CreateContainer within sandbox \"44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 00:10:52.086248 containerd[1528]: time="2025-05-14T00:10:52.084424602Z" level=info msg="Container dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23: CDI devices from CRI Config.CDIDevices: []" May 14 00:10:52.101785 containerd[1528]: time="2025-05-14T00:10:52.101718541Z" level=info msg="CreateContainer within sandbox \"44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23\"" May 14 00:10:52.102974 containerd[1528]: time="2025-05-14T00:10:52.102933919Z" level=info msg="StartContainer for \"dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23\"" May 14 00:10:52.104709 containerd[1528]: time="2025-05-14T00:10:52.104665146Z" level=info msg="connecting to 
shim dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23" address="unix:///run/containerd/s/8b98191525fdb65d93424dbea43c1516f3d56e302b8f2511d5696f40ed529d7b" protocol=ttrpc version=3 May 14 00:10:52.129354 systemd[1]: Started cri-containerd-dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23.scope - libcontainer container dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23. May 14 00:10:52.190371 containerd[1528]: time="2025-05-14T00:10:52.190325382Z" level=info msg="StartContainer for \"dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23\" returns successfully" May 14 00:10:52.213817 systemd[1]: cri-containerd-dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23.scope: Deactivated successfully. May 14 00:10:52.260281 containerd[1528]: time="2025-05-14T00:10:52.260073914Z" level=info msg="received exit event container_id:\"dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23\" id:\"dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23\" pid:3436 exited_at:{seconds:1747181452 nanos:215036928}" May 14 00:10:52.286063 containerd[1528]: time="2025-05-14T00:10:52.286009493Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23\" id:\"dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23\" pid:3436 exited_at:{seconds:1747181452 nanos:215036928}" May 14 00:10:52.302981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23-rootfs.mount: Deactivated successfully. May 14 00:10:52.714392 kubelet[2809]: E0514 00:10:52.714327 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86hgs" podUID="fe53988a-3756-47b4-b495-d0f23a69a35f" May 14 00:10:52.798956 containerd[1528]: time="2025-05-14T00:10:52.798487420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 00:10:52.878945 kubelet[2809]: I0514 00:10:52.878810 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75867b9799-7hksd" podStartSLOduration=3.9744194779999997 podStartE2EDuration="6.878775416s" podCreationTimestamp="2025-05-14 00:10:46 +0000 UTC" firstStartedPulling="2025-05-14 00:10:47.184487451 +0000 UTC m=+12.724971317" lastFinishedPulling="2025-05-14 00:10:50.088843379 +0000 UTC m=+15.629327255" observedRunningTime="2025-05-14 00:10:50.811442168 +0000 UTC m=+16.351926054" watchObservedRunningTime="2025-05-14 00:10:52.878775416 +0000 UTC m=+18.419259321" May 14 00:10:54.691124 kubelet[2809]: E0514 00:10:54.689344 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86hgs" podUID="fe53988a-3756-47b4-b495-d0f23a69a35f" May 14 00:10:55.722582 kubelet[2809]: I0514 00:10:55.722490 2809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:10:56.689910 kubelet[2809]: E0514 00:10:56.689821 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86hgs" podUID="fe53988a-3756-47b4-b495-d0f23a69a35f" May 14 00:10:58.091306 containerd[1528]: time="2025-05-14T00:10:58.091248609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:10:58.092363 containerd[1528]: time="2025-05-14T00:10:58.092298718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 00:10:58.093543 containerd[1528]: time="2025-05-14T00:10:58.093503570Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:10:58.095719 containerd[1528]: time="2025-05-14T00:10:58.095665331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:10:58.096128 containerd[1528]: time="2025-05-14T00:10:58.096104831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.297568532s" May 14 00:10:58.096164 containerd[1528]: time="2025-05-14T00:10:58.096132342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 00:10:58.103525 containerd[1528]: time="2025-05-14T00:10:58.103494460Z" level=info msg="CreateContainer within sandbox \"44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 00:10:58.152531 containerd[1528]: time="2025-05-14T00:10:58.148361687Z" level=info msg="Container e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c: CDI devices from CRI Config.CDIDevices: []" May 14 00:10:58.162783 containerd[1528]: time="2025-05-14T00:10:58.162745914Z" level=info msg="CreateContainer within sandbox \"44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c\"" May 14 00:10:58.165260 containerd[1528]: time="2025-05-14T00:10:58.164342611Z" level=info msg="StartContainer for \"e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c\"" May 14 00:10:58.165782 containerd[1528]: time="2025-05-14T00:10:58.165625128Z" level=info msg="connecting to shim e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c" address="unix:///run/containerd/s/8b98191525fdb65d93424dbea43c1516f3d56e302b8f2511d5696f40ed529d7b" protocol=ttrpc version=3 May 14 00:10:58.218456 systemd[1]: Started cri-containerd-e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c.scope - libcontainer container e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c. 
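[Annotation] The burst of "Failed to unmarshal output for command: init ... unexpected end of JSON input" entries earlier in this log is the kubelet probing the FlexVolume plugin directory nodeagent~uds before Calico's flexvol-driver container (the one that just ran above) has installed the uds binary: the driver call finds no executable, stdout is empty, and decoding "" as JSON fails. Below is a minimal sketch of the JSON handshake the kubelet expects back from an "init" call — illustrative only, under the documented FlexVolume contract; it is not Calico's actual pod2daemon driver.

    // flexvolume_init.go — minimal sketch of the FlexVolume "init" handshake
    // the kubelet driver-call above is waiting for. Illustrative only; the
    // real binary here is Calico's pod2daemon "uds" driver.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // DriverStatus mirrors the result object the kubelet unmarshals from the
    // driver's stdout; an empty stdout is exactly what produces the
    // "unexpected end of JSON input" errors in the log.
    type DriverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(DriverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false}, // node-local driver, no attach/detach
            })
            fmt.Println(string(out))
            return
        }
        // Any other call is answered with "Not supported" per the FlexVolume contract.
        out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
        fmt.Println(string(out))
        os.Exit(1)
    }

Once the flexvol-driver init container above finishes copying the real binary into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, these probe errors stop appearing.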
May 14 00:10:58.268215 containerd[1528]: time="2025-05-14T00:10:58.268170494Z" level=info msg="StartContainer for \"e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c\" returns successfully" May 14 00:10:58.693452 kubelet[2809]: E0514 00:10:58.691931 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86hgs" podUID="fe53988a-3756-47b4-b495-d0f23a69a35f" May 14 00:10:58.780049 containerd[1528]: time="2025-05-14T00:10:58.779875925Z" level=info msg="received exit event container_id:\"e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c\" id:\"e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c\" pid:3494 exited_at:{seconds:1747181458 nanos:779487909}" May 14 00:10:58.779906 systemd[1]: cri-containerd-e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c.scope: Deactivated successfully. May 14 00:10:58.781820 containerd[1528]: time="2025-05-14T00:10:58.781661662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c\" id:\"e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c\" pid:3494 exited_at:{seconds:1747181458 nanos:779487909}" May 14 00:10:58.782803 systemd[1]: cri-containerd-e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c.scope: Consumed 514ms CPU time, 152M memory peak, 4.4M read from disk, 154M written to disk. May 14 00:10:58.807664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c-rootfs.mount: Deactivated successfully. May 14 00:10:58.883257 kubelet[2809]: I0514 00:10:58.880826 2809 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 00:10:58.937245 systemd[1]: Created slice kubepods-burstable-pod58cfd7d7_0524_417e_85d9_480b10f05393.slice - libcontainer container kubepods-burstable-pod58cfd7d7_0524_417e_85d9_480b10f05393.slice. May 14 00:10:58.943003 kubelet[2809]: I0514 00:10:58.942421 2809 status_manager.go:890] "Failed to get status for pod" podUID="58cfd7d7-0524-417e-85d9-480b10f05393" pod="kube-system/coredns-668d6bf9bc-r6jz6" err="pods \"coredns-668d6bf9bc-r6jz6\" is forbidden: User \"system:node:ci-4284-0-0-n-186718797f\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-n-186718797f' and this object" May 14 00:10:58.955098 kubelet[2809]: I0514 00:10:58.955059 2809 status_manager.go:890] "Failed to get status for pod" podUID="1239bae5-1a19-4589-baa4-ecbb42a30c35" pod="calico-apiserver/calico-apiserver-568f957dcc-zsl87" err="pods \"calico-apiserver-568f957dcc-zsl87\" is forbidden: User \"system:node:ci-4284-0-0-n-186718797f\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4284-0-0-n-186718797f' and this object" May 14 00:10:58.963049 systemd[1]: Created slice kubepods-besteffort-pod1239bae5_1a19_4589_baa4_ecbb42a30c35.slice - libcontainer container kubepods-besteffort-pod1239bae5_1a19_4589_baa4_ecbb42a30c35.slice. May 14 00:10:58.970558 systemd[1]: Created slice kubepods-besteffort-poddc857965_ac00_4dfe_a553_796409c1c761.slice - libcontainer container kubepods-besteffort-poddc857965_ac00_4dfe_a553_796409c1c761.slice. 
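[Annotation] The "Created slice kubepods-burstable-pod58cfd7d7_0524_417e_85d9_480b10f05393.slice" entries above show the kubelet's systemd cgroup driver at work: each pod's cgroup slice name is derived from its QoS class and UID, with the UID's dashes escaped to underscores because systemd unit names cannot carry them. A small sketch of that mapping — an illustrative reimplementation, not kubelet code (which lives in its container-manager package):

    // podslice.go — sketch of how the kubepods-*.slice names in the log are
    // derived from pod QoS class and UID under the systemd cgroup driver.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, uid string) string {
        // systemd escaping: the pod UID's dashes become underscores,
        // as seen in every slice name above.
        escaped := strings.ReplaceAll(uid, "-", "_")
        if qos == "guaranteed" {
            // guaranteed pods sit directly under kubepods.slice
            return fmt.Sprintf("kubepods-pod%s.slice", escaped)
        }
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
    }

    func main() {
        // Reproduces the slice names created for the coredns and apiserver pods above.
        fmt.Println(podSlice("burstable", "58cfd7d7-0524-417e-85d9-480b10f05393"))
        fmt.Println(podSlice("besteffort", "1239bae5-1a19-4589-baa4-ecbb42a30c35"))
    }

Running this prints kubepods-burstable-pod58cfd7d7_0524_417e_85d9_480b10f05393.slice and kubepods-besteffort-pod1239bae5_1a19_4589_baa4_ecbb42a30c35.slice, matching the systemd entries in the log.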
May 14 00:10:58.978927 systemd[1]: Created slice kubepods-besteffort-pod1ab90b1d_86be_4930_a872_d9c7c6c86940.slice - libcontainer container kubepods-besteffort-pod1ab90b1d_86be_4930_a872_d9c7c6c86940.slice. May 14 00:10:58.984175 systemd[1]: Created slice kubepods-burstable-pod7d28e089_069d_4ca7_b4e0_7bc9c3ef4192.slice - libcontainer container kubepods-burstable-pod7d28e089_069d_4ca7_b4e0_7bc9c3ef4192.slice. May 14 00:10:58.986368 kubelet[2809]: I0514 00:10:58.985862 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ab90b1d-86be-4930-a872-d9c7c6c86940-tigera-ca-bundle\") pod \"calico-kube-controllers-7d8cdccdc5-z2wjq\" (UID: \"1ab90b1d-86be-4930-a872-d9c7c6c86940\") " pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq" May 14 00:10:58.986368 kubelet[2809]: I0514 00:10:58.985893 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4w8\" (UniqueName: \"kubernetes.io/projected/7d28e089-069d-4ca7-b4e0-7bc9c3ef4192-kube-api-access-vj4w8\") pod \"coredns-668d6bf9bc-sw7b6\" (UID: \"7d28e089-069d-4ca7-b4e0-7bc9c3ef4192\") " pod="kube-system/coredns-668d6bf9bc-sw7b6" May 14 00:10:58.986368 kubelet[2809]: I0514 00:10:58.985909 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc857965-ac00-4dfe-a553-796409c1c761-calico-apiserver-certs\") pod \"calico-apiserver-568f957dcc-7qblh\" (UID: \"dc857965-ac00-4dfe-a553-796409c1c761\") " pod="calico-apiserver/calico-apiserver-568f957dcc-7qblh" May 14 00:10:58.986368 kubelet[2809]: I0514 00:10:58.985926 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58cfd7d7-0524-417e-85d9-480b10f05393-config-volume\") pod \"coredns-668d6bf9bc-r6jz6\" (UID: \"58cfd7d7-0524-417e-85d9-480b10f05393\") " pod="kube-system/coredns-668d6bf9bc-r6jz6" May 14 00:10:58.986368 kubelet[2809]: I0514 00:10:58.985953 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrwbm\" (UniqueName: \"kubernetes.io/projected/58cfd7d7-0524-417e-85d9-480b10f05393-kube-api-access-lrwbm\") pod \"coredns-668d6bf9bc-r6jz6\" (UID: \"58cfd7d7-0524-417e-85d9-480b10f05393\") " pod="kube-system/coredns-668d6bf9bc-r6jz6" May 14 00:10:58.986520 kubelet[2809]: I0514 00:10:58.985966 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d28e089-069d-4ca7-b4e0-7bc9c3ef4192-config-volume\") pod \"coredns-668d6bf9bc-sw7b6\" (UID: \"7d28e089-069d-4ca7-b4e0-7bc9c3ef4192\") " pod="kube-system/coredns-668d6bf9bc-sw7b6" May 14 00:10:58.986520 kubelet[2809]: I0514 00:10:58.985981 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzd6m\" (UniqueName: \"kubernetes.io/projected/1239bae5-1a19-4589-baa4-ecbb42a30c35-kube-api-access-pzd6m\") pod \"calico-apiserver-568f957dcc-zsl87\" (UID: \"1239bae5-1a19-4589-baa4-ecbb42a30c35\") " pod="calico-apiserver/calico-apiserver-568f957dcc-zsl87" May 14 00:10:58.986520 kubelet[2809]: I0514 00:10:58.985996 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4hw\" (UniqueName: 
\"kubernetes.io/projected/dc857965-ac00-4dfe-a553-796409c1c761-kube-api-access-mn4hw\") pod \"calico-apiserver-568f957dcc-7qblh\" (UID: \"dc857965-ac00-4dfe-a553-796409c1c761\") " pod="calico-apiserver/calico-apiserver-568f957dcc-7qblh" May 14 00:10:58.986520 kubelet[2809]: I0514 00:10:58.986009 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1239bae5-1a19-4589-baa4-ecbb42a30c35-calico-apiserver-certs\") pod \"calico-apiserver-568f957dcc-zsl87\" (UID: \"1239bae5-1a19-4589-baa4-ecbb42a30c35\") " pod="calico-apiserver/calico-apiserver-568f957dcc-zsl87" May 14 00:10:58.986520 kubelet[2809]: I0514 00:10:58.986025 2809 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxvqd\" (UniqueName: \"kubernetes.io/projected/1ab90b1d-86be-4930-a872-d9c7c6c86940-kube-api-access-xxvqd\") pod \"calico-kube-controllers-7d8cdccdc5-z2wjq\" (UID: \"1ab90b1d-86be-4930-a872-d9c7c6c86940\") " pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq" May 14 00:10:59.247407 containerd[1528]: time="2025-05-14T00:10:59.246574046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r6jz6,Uid:58cfd7d7-0524-417e-85d9-480b10f05393,Namespace:kube-system,Attempt:0,}" May 14 00:10:59.270426 containerd[1528]: time="2025-05-14T00:10:59.270363057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568f957dcc-zsl87,Uid:1239bae5-1a19-4589-baa4-ecbb42a30c35,Namespace:calico-apiserver,Attempt:0,}" May 14 00:10:59.300872 containerd[1528]: time="2025-05-14T00:10:59.297695330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw7b6,Uid:7d28e089-069d-4ca7-b4e0-7bc9c3ef4192,Namespace:kube-system,Attempt:0,}" May 14 00:10:59.315432 containerd[1528]: time="2025-05-14T00:10:59.315382385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568f957dcc-7qblh,Uid:dc857965-ac00-4dfe-a553-796409c1c761,Namespace:calico-apiserver,Attempt:0,}" May 14 00:10:59.320251 containerd[1528]: time="2025-05-14T00:10:59.315664075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8cdccdc5-z2wjq,Uid:1ab90b1d-86be-4930-a872-d9c7c6c86940,Namespace:calico-system,Attempt:0,}" May 14 00:10:59.537526 containerd[1528]: time="2025-05-14T00:10:59.537374572Z" level=error msg="Failed to destroy network for sandbox \"6ff6aadfefde970e6b6ef22a65adece429b24fd2a48ac4365c5500ad5c587b08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.543797 containerd[1528]: time="2025-05-14T00:10:59.543762466Z" level=error msg="Failed to destroy network for sandbox \"2d2904896a311be3150a00923e06b55525f4dc3996034353b140890d123e1316\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.544591 containerd[1528]: time="2025-05-14T00:10:59.543973887Z" level=error msg="Failed to destroy network for sandbox \"7af20f5e29208310a665ea2f4a31435a5befb551ecc865af9428852e11554747\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 
00:10:59.545404 containerd[1528]: time="2025-05-14T00:10:59.545360647Z" level=error msg="Failed to destroy network for sandbox \"1fd236557b7e46e020ed1213243311800f4ac5a288b3cada5426a5e932fe4721\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.557900 containerd[1528]: time="2025-05-14T00:10:59.546607618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8cdccdc5-z2wjq,Uid:1ab90b1d-86be-4930-a872-d9c7c6c86940,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d2904896a311be3150a00923e06b55525f4dc3996034353b140890d123e1316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.558652 containerd[1528]: time="2025-05-14T00:10:59.547757351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw7b6,Uid:7d28e089-069d-4ca7-b4e0-7bc9c3ef4192,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff6aadfefde970e6b6ef22a65adece429b24fd2a48ac4365c5500ad5c587b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.568727 containerd[1528]: time="2025-05-14T00:10:59.548726591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568f957dcc-zsl87,Uid:1239bae5-1a19-4589-baa4-ecbb42a30c35,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af20f5e29208310a665ea2f4a31435a5befb551ecc865af9428852e11554747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.568727 containerd[1528]: time="2025-05-14T00:10:59.549662869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568f957dcc-7qblh,Uid:dc857965-ac00-4dfe-a553-796409c1c761,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fd236557b7e46e020ed1213243311800f4ac5a288b3cada5426a5e932fe4721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.568727 containerd[1528]: time="2025-05-14T00:10:59.553904150Z" level=error msg="Failed to destroy network for sandbox \"67e4eccb0d8f38cd1fdcb65de286c51acff509a162ffa7235460c6c679f2fb2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.570810 containerd[1528]: time="2025-05-14T00:10:59.570707594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r6jz6,Uid:58cfd7d7-0524-417e-85d9-480b10f05393,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e4eccb0d8f38cd1fdcb65de286c51acff509a162ffa7235460c6c679f2fb2d\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.572271 kubelet[2809]: E0514 00:10:59.571413 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d2904896a311be3150a00923e06b55525f4dc3996034353b140890d123e1316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.572271 kubelet[2809]: E0514 00:10:59.571626 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d2904896a311be3150a00923e06b55525f4dc3996034353b140890d123e1316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq" May 14 00:10:59.572271 kubelet[2809]: E0514 00:10:59.571648 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d2904896a311be3150a00923e06b55525f4dc3996034353b140890d123e1316\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq" May 14 00:10:59.572423 kubelet[2809]: E0514 00:10:59.571686 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system(1ab90b1d-86be-4930-a872-d9c7c6c86940)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system(1ab90b1d-86be-4930-a872-d9c7c6c86940)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d2904896a311be3150a00923e06b55525f4dc3996034353b140890d123e1316\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq" podUID="1ab90b1d-86be-4930-a872-d9c7c6c86940" May 14 00:10:59.572423 kubelet[2809]: E0514 00:10:59.571936 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fd236557b7e46e020ed1213243311800f4ac5a288b3cada5426a5e932fe4721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.572423 kubelet[2809]: E0514 00:10:59.571961 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fd236557b7e46e020ed1213243311800f4ac5a288b3cada5426a5e932fe4721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568f957dcc-7qblh" May 14 00:10:59.572517 kubelet[2809]: E0514 00:10:59.571975 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"1fd236557b7e46e020ed1213243311800f4ac5a288b3cada5426a5e932fe4721\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568f957dcc-7qblh" May 14 00:10:59.572517 kubelet[2809]: E0514 00:10:59.571999 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568f957dcc-7qblh_calico-apiserver(dc857965-ac00-4dfe-a553-796409c1c761)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568f957dcc-7qblh_calico-apiserver(dc857965-ac00-4dfe-a553-796409c1c761)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fd236557b7e46e020ed1213243311800f4ac5a288b3cada5426a5e932fe4721\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568f957dcc-7qblh" podUID="dc857965-ac00-4dfe-a553-796409c1c761" May 14 00:10:59.572517 kubelet[2809]: E0514 00:10:59.572026 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff6aadfefde970e6b6ef22a65adece429b24fd2a48ac4365c5500ad5c587b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.572599 kubelet[2809]: E0514 00:10:59.572039 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff6aadfefde970e6b6ef22a65adece429b24fd2a48ac4365c5500ad5c587b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sw7b6" May 14 00:10:59.572599 kubelet[2809]: E0514 00:10:59.572052 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff6aadfefde970e6b6ef22a65adece429b24fd2a48ac4365c5500ad5c587b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sw7b6" May 14 00:10:59.572599 kubelet[2809]: E0514 00:10:59.572070 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sw7b6_kube-system(7d28e089-069d-4ca7-b4e0-7bc9c3ef4192)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sw7b6_kube-system(7d28e089-069d-4ca7-b4e0-7bc9c3ef4192)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ff6aadfefde970e6b6ef22a65adece429b24fd2a48ac4365c5500ad5c587b08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sw7b6" podUID="7d28e089-069d-4ca7-b4e0-7bc9c3ef4192" May 14 00:10:59.572681 kubelet[2809]: E0514 00:10:59.572090 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"7af20f5e29208310a665ea2f4a31435a5befb551ecc865af9428852e11554747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.572681 kubelet[2809]: E0514 00:10:59.572103 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af20f5e29208310a665ea2f4a31435a5befb551ecc865af9428852e11554747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568f957dcc-zsl87" May 14 00:10:59.572681 kubelet[2809]: E0514 00:10:59.572114 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7af20f5e29208310a665ea2f4a31435a5befb551ecc865af9428852e11554747\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568f957dcc-zsl87" May 14 00:10:59.572755 kubelet[2809]: E0514 00:10:59.572134 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568f957dcc-zsl87_calico-apiserver(1239bae5-1a19-4589-baa4-ecbb42a30c35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568f957dcc-zsl87_calico-apiserver(1239bae5-1a19-4589-baa4-ecbb42a30c35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7af20f5e29208310a665ea2f4a31435a5befb551ecc865af9428852e11554747\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568f957dcc-zsl87" podUID="1239bae5-1a19-4589-baa4-ecbb42a30c35" May 14 00:10:59.572755 kubelet[2809]: E0514 00:10:59.572157 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e4eccb0d8f38cd1fdcb65de286c51acff509a162ffa7235460c6c679f2fb2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:10:59.572755 kubelet[2809]: E0514 00:10:59.572171 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e4eccb0d8f38cd1fdcb65de286c51acff509a162ffa7235460c6c679f2fb2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-r6jz6" May 14 00:10:59.572873 kubelet[2809]: E0514 00:10:59.572181 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e4eccb0d8f38cd1fdcb65de286c51acff509a162ffa7235460c6c679f2fb2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-r6jz6" May 14 00:10:59.572873 kubelet[2809]: E0514 00:10:59.572209 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r6jz6_kube-system(58cfd7d7-0524-417e-85d9-480b10f05393)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r6jz6_kube-system(58cfd7d7-0524-417e-85d9-480b10f05393)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67e4eccb0d8f38cd1fdcb65de286c51acff509a162ffa7235460c6c679f2fb2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-r6jz6" podUID="58cfd7d7-0524-417e-85d9-480b10f05393" May 14 00:10:59.897250 containerd[1528]: time="2025-05-14T00:10:59.896567659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 00:11:00.153528 systemd[1]: run-netns-cni\x2d404c461b\x2ddb24\x2d1e52\x2d9f50\x2dcf9ec85dc0cb.mount: Deactivated successfully. May 14 00:11:00.153715 systemd[1]: run-netns-cni\x2d54356e2f\x2d5a58\x2df8db\x2df8bf\x2d87b41a289944.mount: Deactivated successfully. May 14 00:11:00.153824 systemd[1]: run-netns-cni\x2dad372f55\x2d1776\x2d829d\x2def72\x2d1139e69f7ff5.mount: Deactivated successfully. May 14 00:11:00.153991 systemd[1]: run-netns-cni\x2d43cf6333\x2d6a41\x2d3327\x2d4a7c\x2d3f5ce2a99fad.mount: Deactivated successfully. May 14 00:11:00.699865 systemd[1]: Created slice kubepods-besteffort-podfe53988a_3756_47b4_b495_d0f23a69a35f.slice - libcontainer container kubepods-besteffort-podfe53988a_3756_47b4_b495_d0f23a69a35f.slice. May 14 00:11:00.707606 containerd[1528]: time="2025-05-14T00:11:00.707553376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86hgs,Uid:fe53988a-3756-47b4-b495-d0f23a69a35f,Namespace:calico-system,Attempt:0,}" May 14 00:11:00.787556 containerd[1528]: time="2025-05-14T00:11:00.787480185Z" level=error msg="Failed to destroy network for sandbox \"6121772b3022ea687d2d4f92dad6928ecdb0dac9d7bc894d57cce3380d5e1b19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:11:00.793606 containerd[1528]: time="2025-05-14T00:11:00.793468885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86hgs,Uid:fe53988a-3756-47b4-b495-d0f23a69a35f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6121772b3022ea687d2d4f92dad6928ecdb0dac9d7bc894d57cce3380d5e1b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:11:00.794545 systemd[1]: run-netns-cni\x2d9610add1\x2dbf02\x2d3f45\x2d0c0a\x2dd4652a4defec.mount: Deactivated successfully. 
May 14 00:11:00.796763 kubelet[2809]: E0514 00:11:00.794728 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6121772b3022ea687d2d4f92dad6928ecdb0dac9d7bc894d57cce3380d5e1b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 00:11:00.796763 kubelet[2809]: E0514 00:11:00.794834 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6121772b3022ea687d2d4f92dad6928ecdb0dac9d7bc894d57cce3380d5e1b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86hgs" May 14 00:11:00.796763 kubelet[2809]: E0514 00:11:00.794891 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6121772b3022ea687d2d4f92dad6928ecdb0dac9d7bc894d57cce3380d5e1b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86hgs" May 14 00:11:00.797071 kubelet[2809]: E0514 00:11:00.794956 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86hgs_calico-system(fe53988a-3756-47b4-b495-d0f23a69a35f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86hgs_calico-system(fe53988a-3756-47b4-b495-d0f23a69a35f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6121772b3022ea687d2d4f92dad6928ecdb0dac9d7bc894d57cce3380d5e1b19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86hgs" podUID="fe53988a-3756-47b4-b495-d0f23a69a35f" May 14 00:11:07.999194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376375284.mount: Deactivated successfully. 
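[Annotation] Note how each CNI failure above surfaces several times per pod (log.go, kuberuntime_sandbox.go, kuberuntime_manager.go, pod_workers.go): the CRI call returns a gRPC error with code Unknown, and each kubelet layer re-logs it as the call unwinds. A hedged sketch of how such an error is produced and inspected with the grpc-go status package — illustrative plumbing, not kubelet or containerd source:

    // cri_error.go — sketch of the "rpc error: code = Unknown desc = ..." shape
    // repeated through the kubelet entries above.
    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func main() {
        // Server side: a plain error returned from a gRPC handler reaches the
        // client with code Unknown, as in every RunPodSandbox failure above.
        err := status.Error(codes.Unknown,
            `failed to setup network for sandbox: plugin type="calico" failed (add)`)

        // Client side: unwrap the status to decide how to log and retry.
        if s, ok := status.FromError(err); ok {
            fmt.Println("code:", s.Code())    // Unknown
            fmt.Println("desc:", s.Message()) // the CNI error text
        }
    }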
May 14 00:11:08.209903 containerd[1528]: time="2025-05-14T00:11:08.209167311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:11:08.274730 containerd[1528]: time="2025-05-14T00:11:08.186504462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 00:11:08.274730 containerd[1528]: time="2025-05-14T00:11:08.258892935Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:11:08.289529 containerd[1528]: time="2025-05-14T00:11:08.288958351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:11:08.298776 containerd[1528]: time="2025-05-14T00:11:08.298700568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 8.398755747s" May 14 00:11:08.305175 containerd[1528]: time="2025-05-14T00:11:08.305128880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 14 00:11:08.425910 containerd[1528]: time="2025-05-14T00:11:08.425822077Z" level=info msg="CreateContainer within sandbox \"44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 00:11:08.520311 containerd[1528]: time="2025-05-14T00:11:08.520252642Z" level=info msg="Container eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d: CDI devices from CRI Config.CDIDevices: []" May 14 00:11:08.711267 containerd[1528]: time="2025-05-14T00:11:08.711139591Z" level=info msg="CreateContainer within sandbox \"44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\"" May 14 00:11:08.719209 containerd[1528]: time="2025-05-14T00:11:08.718309075Z" level=info msg="StartContainer for \"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\"" May 14 00:11:08.738586 containerd[1528]: time="2025-05-14T00:11:08.738521337Z" level=info msg="connecting to shim eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d" address="unix:///run/containerd/s/8b98191525fdb65d93424dbea43c1516f3d56e302b8f2511d5696f40ed529d7b" protocol=ttrpc version=3 May 14 00:11:08.942996 systemd[1]: Started cri-containerd-eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d.scope - libcontainer container eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d. May 14 00:11:09.010291 containerd[1528]: time="2025-05-14T00:11:09.010079109Z" level=info msg="StartContainer for \"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" returns successfully" May 14 00:11:09.081495 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 00:11:09.082747 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. May 14 00:11:10.302856 containerd[1528]: time="2025-05-14T00:11:10.302801457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"deba33056c05d3557555e6063a30353d961b9d6269cd9d53c1ac5cb05e1d4007\" pid:3791 exit_status:1 exited_at:{seconds:1747181470 nanos:302313844}" May 14 00:11:10.694328 containerd[1528]: time="2025-05-14T00:11:10.691179347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8cdccdc5-z2wjq,Uid:1ab90b1d-86be-4930-a872-d9c7c6c86940,Namespace:calico-system,Attempt:0,}" May 14 00:11:10.850257 kernel: bpftool[3936]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 14 00:11:11.100283 systemd-networkd[1411]: calie8f777c6c2b: Link UP May 14 00:11:11.100422 systemd-networkd[1411]: calie8f777c6c2b: Gained carrier May 14 00:11:11.128615 containerd[1528]: 2025-05-14 00:11:10.761 [INFO][3894] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 00:11:11.128615 containerd[1528]: 2025-05-14 00:11:10.806 [INFO][3894] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0 calico-kube-controllers-7d8cdccdc5- calico-system 1ab90b1d-86be-4930-a872-d9c7c6c86940 710 0 2025-05-14 00:10:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d8cdccdc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284-0-0-n-186718797f calico-kube-controllers-7d8cdccdc5-z2wjq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie8f777c6c2b [] []}} ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Namespace="calico-system" Pod="calico-kube-controllers-7d8cdccdc5-z2wjq" WorkloadEndpoint="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-" May 14 00:11:11.128615 containerd[1528]: 2025-05-14 00:11:10.806 [INFO][3894] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Namespace="calico-system" Pod="calico-kube-controllers-7d8cdccdc5-z2wjq" WorkloadEndpoint="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0" May 14 00:11:11.128615 containerd[1528]: 2025-05-14 00:11:11.006 [INFO][3929] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" HandleID="k8s-pod-network.b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Workload="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0" May 14 00:11:11.129194 containerd[1528]: 2025-05-14 00:11:11.021 [INFO][3929] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" HandleID="k8s-pod-network.b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Workload="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b1bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284-0-0-n-186718797f", "pod":"calico-kube-controllers-7d8cdccdc5-z2wjq", 
"timestamp":"2025-05-14 00:11:11.006411813 +0000 UTC"}, Hostname:"ci-4284-0-0-n-186718797f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 00:11:11.129194 containerd[1528]: 2025-05-14 00:11:11.021 [INFO][3929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 00:11:11.129194 containerd[1528]: 2025-05-14 00:11:11.021 [INFO][3929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 00:11:11.129194 containerd[1528]: 2025-05-14 00:11:11.021 [INFO][3929] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-186718797f' May 14 00:11:11.129194 containerd[1528]: 2025-05-14 00:11:11.028 [INFO][3929] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" host="ci-4284-0-0-n-186718797f" May 14 00:11:11.129194 containerd[1528]: 2025-05-14 00:11:11.043 [INFO][3929] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-186718797f" May 14 00:11:11.129194 containerd[1528]: 2025-05-14 00:11:11.055 [INFO][3929] ipam/ipam.go 489: Trying affinity for 192.168.54.128/26 host="ci-4284-0-0-n-186718797f" May 14 00:11:11.129194 containerd[1528]: 2025-05-14 00:11:11.058 [INFO][3929] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.128/26 host="ci-4284-0-0-n-186718797f" May 14 00:11:11.129194 containerd[1528]: 2025-05-14 00:11:11.061 [INFO][3929] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.128/26 host="ci-4284-0-0-n-186718797f" May 14 00:11:11.132073 containerd[1528]: 2025-05-14 00:11:11.061 [INFO][3929] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.128/26 handle="k8s-pod-network.b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" host="ci-4284-0-0-n-186718797f" May 14 00:11:11.132073 containerd[1528]: 2025-05-14 00:11:11.064 [INFO][3929] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4 May 14 00:11:11.132073 containerd[1528]: 2025-05-14 00:11:11.069 [INFO][3929] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.128/26 handle="k8s-pod-network.b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" host="ci-4284-0-0-n-186718797f" May 14 00:11:11.132073 containerd[1528]: 2025-05-14 00:11:11.078 [INFO][3929] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.129/26] block=192.168.54.128/26 handle="k8s-pod-network.b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" host="ci-4284-0-0-n-186718797f" May 14 00:11:11.132073 containerd[1528]: 2025-05-14 00:11:11.078 [INFO][3929] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.129/26] handle="k8s-pod-network.b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" host="ci-4284-0-0-n-186718797f" May 14 00:11:11.132073 containerd[1528]: 2025-05-14 00:11:11.078 [INFO][3929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 00:11:11.132073 containerd[1528]: 2025-05-14 00:11:11.079 [INFO][3929] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.129/26] IPv6=[] ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" HandleID="k8s-pod-network.b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Workload="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0" May 14 00:11:11.132839 containerd[1528]: 2025-05-14 00:11:11.084 [INFO][3894] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Namespace="calico-system" Pod="calico-kube-controllers-7d8cdccdc5-z2wjq" WorkloadEndpoint="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0", GenerateName:"calico-kube-controllers-7d8cdccdc5-", Namespace:"calico-system", SelfLink:"", UID:"1ab90b1d-86be-4930-a872-d9c7c6c86940", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d8cdccdc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-186718797f", ContainerID:"", Pod:"calico-kube-controllers-7d8cdccdc5-z2wjq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8f777c6c2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:11:11.132898 containerd[1528]: 2025-05-14 00:11:11.084 [INFO][3894] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.129/32] ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Namespace="calico-system" Pod="calico-kube-controllers-7d8cdccdc5-z2wjq" WorkloadEndpoint="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0" May 14 00:11:11.132898 containerd[1528]: 2025-05-14 00:11:11.084 [INFO][3894] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8f777c6c2b ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Namespace="calico-system" Pod="calico-kube-controllers-7d8cdccdc5-z2wjq" WorkloadEndpoint="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0" May 14 00:11:11.132898 containerd[1528]: 2025-05-14 00:11:11.099 [INFO][3894] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Namespace="calico-system" Pod="calico-kube-controllers-7d8cdccdc5-z2wjq" WorkloadEndpoint="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0" May 14 
00:11:11.132967 containerd[1528]: 2025-05-14 00:11:11.100 [INFO][3894] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Namespace="calico-system" Pod="calico-kube-controllers-7d8cdccdc5-z2wjq" WorkloadEndpoint="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0", GenerateName:"calico-kube-controllers-7d8cdccdc5-", Namespace:"calico-system", SelfLink:"", UID:"1ab90b1d-86be-4930-a872-d9c7c6c86940", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 0, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d8cdccdc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-186718797f", ContainerID:"b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4", Pod:"calico-kube-controllers-7d8cdccdc5-z2wjq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie8f777c6c2b", MAC:"e6:95:a8:60:ec:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 00:11:11.133016 containerd[1528]: 2025-05-14 00:11:11.116 [INFO][3894] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4" Namespace="calico-system" Pod="calico-kube-controllers-7d8cdccdc5-z2wjq" WorkloadEndpoint="ci--4284--0--0--n--186718797f-k8s-calico--kube--controllers--7d8cdccdc5--z2wjq-eth0" May 14 00:11:11.138339 kubelet[2809]: I0514 00:11:11.122891 2809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qhscz" podStartSLOduration=4.109939411 podStartE2EDuration="25.112673373s" podCreationTimestamp="2025-05-14 00:10:46 +0000 UTC" firstStartedPulling="2025-05-14 00:10:47.303152289 +0000 UTC m=+12.843636155" lastFinishedPulling="2025-05-14 00:11:08.305886241 +0000 UTC m=+33.846370117" observedRunningTime="2025-05-14 00:11:10.172371946 +0000 UTC m=+35.712855852" watchObservedRunningTime="2025-05-14 00:11:11.112673373 +0000 UTC m=+36.653157239" May 14 00:11:11.179038 systemd-networkd[1411]: vxlan.calico: Link UP May 14 00:11:11.179047 systemd-networkd[1411]: vxlan.calico: Gained carrier May 14 00:11:11.200145 containerd[1528]: time="2025-05-14T00:11:11.200112528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"e7b521e9a0055ecd885d89a639aac96ce5cc94d262d26ed6793fad222d01c649\" pid:3971 exit_status:1 exited_at:{seconds:1747181471 nanos:199767178}" May 14 00:11:12.135865 containerd[1528]: 
time="2025-05-14T00:11:12.135796912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"9a1809c6bdcb4925e5ff2119bddfa7a086189ccb3b0e17ecac81d39163ac34c3\" pid:4062 exit_status:1 exited_at:{seconds:1747181472 nanos:135424592}" May 14 00:11:12.397473 systemd-networkd[1411]: calie8f777c6c2b: Gained IPv6LL May 14 00:11:12.589428 systemd-networkd[1411]: vxlan.calico: Gained IPv6LL May 14 00:11:12.689954 containerd[1528]: time="2025-05-14T00:11:12.689100465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86hgs,Uid:fe53988a-3756-47b4-b495-d0f23a69a35f,Namespace:calico-system,Attempt:0,}" May 14 00:11:13.688321 containerd[1528]: time="2025-05-14T00:11:13.688267579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568f957dcc-zsl87,Uid:1239bae5-1a19-4589-baa4-ecbb42a30c35,Namespace:calico-apiserver,Attempt:0,}" May 14 00:11:14.696137 containerd[1528]: time="2025-05-14T00:11:14.694332010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw7b6,Uid:7d28e089-069d-4ca7-b4e0-7bc9c3ef4192,Namespace:kube-system,Attempt:0,}" May 14 00:11:14.697050 containerd[1528]: time="2025-05-14T00:11:14.696940506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568f957dcc-7qblh,Uid:dc857965-ac00-4dfe-a553-796409c1c761,Namespace:calico-apiserver,Attempt:0,}" May 14 00:11:14.697415 containerd[1528]: time="2025-05-14T00:11:14.697324528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r6jz6,Uid:58cfd7d7-0524-417e-85d9-480b10f05393,Namespace:kube-system,Attempt:0,}" May 14 00:11:40.690849 kubelet[2809]: E0514 00:11:40.690778 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:11:40.794934 kubelet[2809]: E0514 00:11:40.791066 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:11:40.991506 kubelet[2809]: E0514 00:11:40.991449 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:11:41.392514 kubelet[2809]: E0514 00:11:41.392312 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:11:42.104688 containerd[1528]: time="2025-05-14T00:11:42.104634253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"ce4c217b2a12179619410775402a86657080c0d894f6436f85600fc62303e3fb\" pid:4121 exited_at:{seconds:1747181502 nanos:103966149}" May 14 00:11:42.193524 kubelet[2809]: E0514 00:11:42.193427 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:11:43.794379 kubelet[2809]: E0514 00:11:43.794276 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:11:46.817088 kubelet[2809]: I0514 00:11:46.816976 2809 setters.go:602] "Node became not ready" node="ci-4284-0-0-n-186718797f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T00:11:46Z","lastTransitionTime":"2025-05-14T00:11:46Z","reason":"KubeletNotReady","message":"container runtime is down"} May 14 00:11:46.994592 kubelet[2809]: E0514 00:11:46.994487 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:11:51.995140 kubelet[2809]: E0514 00:11:51.995009 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:11:56.995779 
kubelet[2809]: E0514 00:11:56.995665 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:01.997280 kubelet[2809]: E0514 00:12:01.997165 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:06.997427 kubelet[2809]: E0514 00:12:06.997330 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:11.997916 kubelet[2809]: E0514 00:12:11.997843 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:12.117114 containerd[1528]: time="2025-05-14T00:12:12.117044252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"6286d0e030647d7f8efedb8f234a2f58950864284f2ca3bf078df68041bfd67d\" pid:4155 exited_at:{seconds:1747181532 nanos:116308730}" May 14 00:12:16.998556 kubelet[2809]: E0514 00:12:16.998497 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:21.999514 kubelet[2809]: E0514 00:12:21.999428 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:26.999770 kubelet[2809]: E0514 00:12:26.999624 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:31.999909 kubelet[2809]: E0514 00:12:31.999825 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:37.000248 kubelet[2809]: E0514 00:12:37.000162 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:42.001110 kubelet[2809]: E0514 00:12:42.001043 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:42.114435 containerd[1528]: time="2025-05-14T00:12:42.114338338Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"1ea395e91a27790a81a937e733506c19191005aee148a00879636a4561d92b9e\" pid:4192 exited_at:{seconds:1747181562 nanos:113436977}" May 14 00:12:47.002307 kubelet[2809]: E0514 00:12:47.002211 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:52.002672 kubelet[2809]: E0514 00:12:52.002612 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:12:57.002878 kubelet[2809]: E0514 00:12:57.002789 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:13:02.003352 kubelet[2809]: E0514 00:13:02.003283 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:13:07.004391 kubelet[2809]: E0514 00:13:07.004282 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:13:12.005431 kubelet[2809]: E0514 00:13:12.005367 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:13:12.110047 containerd[1528]: time="2025-05-14T00:13:12.109972207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"6bc6af5cc0f9b6e635a8d75dec8ea57829188188c2533b727dd5e316629b39f9\" pid:4232 exited_at:{seconds:1747181592 nanos:109273117}" May 14 00:13:14.814756 kubelet[2809]: E0514 00:13:14.814662 2809 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 14 00:13:14.814756 kubelet[2809]: 
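From 00:11:40 onward the kubelet loops on "container runtime is down", and at 00:13:14 the underlying cause surfaces: the CRI Status RPC against containerd times out with DeadlineExceeded. A wedged CRI endpoint can be probed with the same RPC directly. Below is a minimal Go sketch of such a probe; the socket path is containerd's common default and is an assumption for this host, not something the log states:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        pb "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed socket path; adjust for the host being debugged.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        client := pb.NewRuntimeServiceClient(conn)

        // A short timeout makes a hung CRI plugin surface as DeadlineExceeded,
        // mirroring the error the kubelet keeps logging above.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        resp, err := client.Status(ctx, &pb.StatusRequest{Verbose: true})
        if err != nil {
            log.Fatalf("runtime Status RPC failed: %v", err)
        }
        for _, c := range resp.GetStatus().GetConditions() {
            fmt.Printf("%s=%v reason=%q\n", c.GetType(), c.GetStatus(), c.GetReason())
        }
    }

A healthy runtime returns RuntimeReady and NetworkReady conditions promptly; here the call would hang until the deadline, matching the sanity-check failures in the log.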
May 14 00:13:17.005945 kubelet[2809]: E0514 00:13:17.005888 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:13:22.006542 kubelet[2809]: E0514 00:13:22.006434 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:13:27.007352 kubelet[2809]: E0514 00:13:27.007279 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:13:32.007812 kubelet[2809]: E0514 00:13:32.007729 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:13:37.008998 kubelet[2809]: E0514 00:13:37.008932 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:13:42.009481 kubelet[2809]: E0514 00:13:42.009388 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:13:42.130188 containerd[1528]: time="2025-05-14T00:13:42.130115836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"707885ecc9ad4b587fa33d2580db3908847bfb93d5704ba3653e661c77898344\" pid:4265 exited_at:{seconds:1747181622 nanos:129364863}"
May 14 00:13:47.009965 kubelet[2809]: E0514 00:13:47.009906 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:13:52.011163 kubelet[2809]: E0514 00:13:52.011084 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:13:57.011308 kubelet[2809]: E0514 00:13:57.011199 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:02.012139 kubelet[2809]: E0514 00:14:02.012071 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:07.013373 kubelet[2809]: E0514 00:14:07.013284 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:12.013759 kubelet[2809]: E0514 00:14:12.013680 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:12.128479 containerd[1528]: time="2025-05-14T00:14:12.128416614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"02aef779c76379d00d7be12f55126b5d274a13afe7aeff81212441c7b6179b19\" pid:4302 exited_at:{seconds:1747181652 nanos:127944167}"
May 14 00:14:17.013910 kubelet[2809]: E0514 00:14:17.013846 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:22.014998 kubelet[2809]: E0514 00:14:22.014928 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:27.017065 kubelet[2809]: E0514 00:14:27.016005 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:32.016244 kubelet[2809]: E0514 00:14:32.016172 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:37.016671 kubelet[2809]: E0514 00:14:37.016602 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:42.017584 kubelet[2809]: E0514 00:14:42.017513 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:42.146002 containerd[1528]: time="2025-05-14T00:14:42.145956087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"6b2e7eefc5691d6cf2048cf2c76c8fea832a745ec092fe0e7266f5587ce8c75b\" pid:4342 exited_at:{seconds:1747181682 nanos:145521011}"
May 14 00:14:47.018617 kubelet[2809]: E0514 00:14:47.018536 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:52.018866 kubelet[2809]: E0514 00:14:52.018761 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:14:57.020113 kubelet[2809]: E0514 00:14:57.019628 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:02.020624 kubelet[2809]: E0514 00:15:02.020560 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:07.021399 kubelet[2809]: E0514 00:15:07.021290 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:10.691095 kubelet[2809]: E0514 00:15:10.691025 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:15:10.691095 kubelet[2809]: E0514 00:15:10.691103 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq"
May 14 00:15:10.691730 kubelet[2809]: E0514 00:15:10.691131 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq"
May 14 00:15:10.691730 kubelet[2809]: E0514 00:15:10.691182 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system(1ab90b1d-86be-4930-a872-d9c7c6c86940)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system(1ab90b1d-86be-4930-a872-d9c7c6c86940)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq" podUID="1ab90b1d-86be-4930-a872-d9c7c6c86940"
May 14 00:15:11.695962 containerd[1528]: time="2025-05-14T00:15:11.695892417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8cdccdc5-z2wjq,Uid:1ab90b1d-86be-4930-a872-d9c7c6c86940,Namespace:calico-system,Attempt:0,}"
May 14 00:15:11.695962 containerd[1528]: time="2025-05-14T00:15:11.695973860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d8cdccdc5-z2wjq,Uid:1ab90b1d-86be-4930-a872-d9c7c6c86940,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system_1ab90b1d-86be-4930-a872-d9c7c6c86940_0\": name \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system_1ab90b1d-86be-4930-a872-d9c7c6c86940_0\" is reserved for \"b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4\""
May 14 00:15:11.696808 kubelet[2809]: E0514 00:15:11.696111 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system_1ab90b1d-86be-4930-a872-d9c7c6c86940_0\": name \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system_1ab90b1d-86be-4930-a872-d9c7c6c86940_0\" is reserved for \"b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4\""
May 14 00:15:11.696808 kubelet[2809]: E0514 00:15:11.696162 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system_1ab90b1d-86be-4930-a872-d9c7c6c86940_0\": name \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system_1ab90b1d-86be-4930-a872-d9c7c6c86940_0\" is reserved for \"b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4\"" pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq"
May 14 00:15:11.696808 kubelet[2809]: E0514 00:15:11.696191 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system_1ab90b1d-86be-4930-a872-d9c7c6c86940_0\": name \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system_1ab90b1d-86be-4930-a872-d9c7c6c86940_0\" is reserved for \"b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4\"" pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq"
May 14 00:15:11.697524 kubelet[2809]: E0514 00:15:11.696255 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system(1ab90b1d-86be-4930-a872-d9c7c6c86940)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system(1ab90b1d-86be-4930-a872-d9c7c6c86940)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system_1ab90b1d-86be-4930-a872-d9c7c6c86940_0\\\": name \\\"calico-kube-controllers-7d8cdccdc5-z2wjq_calico-system_1ab90b1d-86be-4930-a872-d9c7c6c86940_0\\\" is reserved for \\\"b836ba75c5925a66dfaf7919332b583a960a85b1b83b47256f86439328904fa4\\\"\"" pod="calico-system/calico-kube-controllers-7d8cdccdc5-z2wjq" podUID="1ab90b1d-86be-4930-a872-d9c7c6c86940"
May 14 00:15:12.022279 kubelet[2809]: E0514 00:15:12.022141 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:12.125725 containerd[1528]: time="2025-05-14T00:15:12.125652923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"5f3306fe6199f2bfb060645d5357aa1056c068a52b9e77b7d42d74cb9113df35\" pid:4368 exited_at:{seconds:1747181712 nanos:125030015}"
May 14 00:15:12.689926 kubelet[2809]: E0514 00:15:12.689815 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:15:12.689926 kubelet[2809]: E0514 00:15:12.689927 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/csi-node-driver-86hgs"
May 14 00:15:12.690304 kubelet[2809]: E0514 00:15:12.689964 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-system/csi-node-driver-86hgs"
May 14 00:15:12.690304 kubelet[2809]: E0514 00:15:12.690026 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86hgs_calico-system(fe53988a-3756-47b4-b495-d0f23a69a35f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86hgs_calico-system(fe53988a-3756-47b4-b495-d0f23a69a35f)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-system/csi-node-driver-86hgs" podUID="fe53988a-3756-47b4-b495-d0f23a69a35f"
May 14 00:15:13.688743 kubelet[2809]: E0514 00:15:13.688639 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:15:13.688743 kubelet[2809]: E0514 00:15:13.688771 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-568f957dcc-zsl87"
May 14 00:15:13.690021 kubelet[2809]: E0514 00:15:13.688810 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-568f957dcc-zsl87"
May 14 00:15:13.690021 kubelet[2809]: E0514 00:15:13.688881 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568f957dcc-zsl87_calico-apiserver(1239bae5-1a19-4589-baa4-ecbb42a30c35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568f957dcc-zsl87_calico-apiserver(1239bae5-1a19-4589-baa4-ecbb42a30c35)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-568f957dcc-zsl87" podUID="1239bae5-1a19-4589-baa4-ecbb42a30c35"
May 14 00:15:14.692769 kubelet[2809]: E0514 00:15:14.692684 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:15:14.694002 kubelet[2809]: E0514 00:15:14.692787 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-668d6bf9bc-sw7b6"
May 14 00:15:14.694002 kubelet[2809]: E0514 00:15:14.692822 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-668d6bf9bc-sw7b6"
May 14 00:15:14.694002 kubelet[2809]: E0514 00:15:14.692887 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sw7b6_kube-system(7d28e089-069d-4ca7-b4e0-7bc9c3ef4192)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sw7b6_kube-system(7d28e089-069d-4ca7-b4e0-7bc9c3ef4192)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-668d6bf9bc-sw7b6" podUID="7d28e089-069d-4ca7-b4e0-7bc9c3ef4192"
May 14 00:15:14.694002 kubelet[2809]: E0514 00:15:14.692942 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:15:14.694002 kubelet[2809]: E0514 00:15:14.692974 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-668d6bf9bc-r6jz6"
May 14 00:15:14.694002 kubelet[2809]: E0514 00:15:14.692999 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-668d6bf9bc-r6jz6"
May 14 00:15:14.694002 kubelet[2809]: E0514 00:15:14.693032 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r6jz6_kube-system(58cfd7d7-0524-417e-85d9-480b10f05393)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r6jz6_kube-system(58cfd7d7-0524-417e-85d9-480b10f05393)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-668d6bf9bc-r6jz6" podUID="58cfd7d7-0524-417e-85d9-480b10f05393"
May 14 00:15:14.694562 kubelet[2809]: E0514 00:15:14.693522 2809 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:15:14.694562 kubelet[2809]: E0514 00:15:14.693560 2809 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-568f957dcc-7qblh"
May 14 00:15:14.694562 kubelet[2809]: E0514 00:15:14.693577 2809 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-568f957dcc-7qblh"
May 14 00:15:14.694562 kubelet[2809]: E0514 00:15:14.693613 2809 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568f957dcc-7qblh_calico-apiserver(dc857965-ac00-4dfe-a553-796409c1c761)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568f957dcc-7qblh_calico-apiserver(dc857965-ac00-4dfe-a553-796409c1c761)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-568f957dcc-7qblh" podUID="dc857965-ac00-4dfe-a553-796409c1c761"
May 14 00:15:17.023289 kubelet[2809]: E0514 00:15:17.023187 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:19.816651 kubelet[2809]: E0514 00:15:19.816536 2809 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:15:19.816651 kubelet[2809]: E0514 00:15:19.816611 2809 kubelet.go:2993] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:15:22.023454 kubelet[2809]: E0514 00:15:22.023332 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:27.023578 kubelet[2809]: E0514 00:15:27.023478 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:29.179527 containerd[1528]: time="2025-05-14T00:15:29.174871460Z" level=warning msg="container event discarded" container=e583d60b07e969388f97cdec8747804179cd438ce537aa023310eb7835fa5368 type=CONTAINER_CREATED_EVENT
May 14 00:15:29.179527 containerd[1528]: time="2025-05-14T00:15:29.179505720Z" level=warning msg="container event discarded" container=e583d60b07e969388f97cdec8747804179cd438ce537aa023310eb7835fa5368 type=CONTAINER_STARTED_EVENT
May 14 00:15:29.199605 containerd[1528]: time="2025-05-14T00:15:29.199422072Z" level=warning msg="container event discarded" container=024849be89ce7e2b6ed84b6b2e452c229ecde70fbe69a3e0561f6a4385025b1a type=CONTAINER_CREATED_EVENT
May 14 00:15:29.199605 containerd[1528]: time="2025-05-14T00:15:29.199568426Z" level=warning msg="container event discarded" container=024849be89ce7e2b6ed84b6b2e452c229ecde70fbe69a3e0561f6a4385025b1a type=CONTAINER_STARTED_EVENT
May 14 00:15:29.199605 containerd[1528]: time="2025-05-14T00:15:29.199587131Z" level=warning msg="container event discarded" container=fd610c6d18e452debc2fdf59a261132433483fe76d54562e0ec4ae4074a300fc type=CONTAINER_CREATED_EVENT
May 14 00:15:29.199605 containerd[1528]: time="2025-05-14T00:15:29.199601127Z" level=warning msg="container event discarded" container=fd610c6d18e452debc2fdf59a261132433483fe76d54562e0ec4ae4074a300fc type=CONTAINER_STARTED_EVENT
May 14 00:15:29.228074 containerd[1528]: time="2025-05-14T00:15:29.227942208Z" level=warning msg="container event discarded" container=d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78 type=CONTAINER_CREATED_EVENT
May 14 00:15:29.228074 containerd[1528]: time="2025-05-14T00:15:29.228005707Z" level=warning msg="container event discarded" container=e154c3ed7489944894d52ff1ed24bd8e66c4afa88351d2ef1968b46fd0fbe1fe type=CONTAINER_CREATED_EVENT
May 14 00:15:29.239411 containerd[1528]: time="2025-05-14T00:15:29.239294270Z" level=warning msg="container event discarded" container=a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96 type=CONTAINER_CREATED_EVENT
May 14 00:15:29.345141 containerd[1528]: time="2025-05-14T00:15:29.345006593Z" level=warning msg="container event discarded" container=d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78 type=CONTAINER_STARTED_EVENT
May 14 00:15:29.382472 containerd[1528]: time="2025-05-14T00:15:29.382387456Z" level=warning msg="container event discarded" container=e154c3ed7489944894d52ff1ed24bd8e66c4afa88351d2ef1968b46fd0fbe1fe type=CONTAINER_STARTED_EVENT
May 14 00:15:29.398144 containerd[1528]: time="2025-05-14T00:15:29.398058967Z" level=warning msg="container event discarded" container=a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96 type=CONTAINER_STARTED_EVENT
May 14 00:15:32.024589 kubelet[2809]: E0514 00:15:32.024384 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:37.025583 kubelet[2809]: E0514 00:15:37.025486 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:40.395178 containerd[1528]: time="2025-05-14T00:15:40.395035024Z" level=warning msg="container event discarded" container=6ad3526e8b7b86dcc7c9e73e082cd5d23daf8c54738b7abc338efd0c31b47d0d type=CONTAINER_CREATED_EVENT
May 14 00:15:40.395178 containerd[1528]: time="2025-05-14T00:15:40.395141744Z" level=warning msg="container event discarded" container=6ad3526e8b7b86dcc7c9e73e082cd5d23daf8c54738b7abc338efd0c31b47d0d type=CONTAINER_STARTED_EVENT
May 14 00:15:40.430517 containerd[1528]: time="2025-05-14T00:15:40.430383118Z" level=warning msg="container event discarded" container=944b1d046938f47b48d20ec511f399ba920fc4da925862273a544e4613878ad6 type=CONTAINER_CREATED_EVENT
May 14 00:15:40.506778 containerd[1528]: time="2025-05-14T00:15:40.506523842Z" level=warning msg="container event discarded" container=944b1d046938f47b48d20ec511f399ba920fc4da925862273a544e4613878ad6 type=CONTAINER_STARTED_EVENT
May 14 00:15:40.808779 containerd[1528]: time="2025-05-14T00:15:40.808696725Z" level=warning msg="container event discarded" container=7ad623e2f4e8318218c8b161ecfb2f68b228fe7d8b3049e8013d47fb567c9d7d type=CONTAINER_CREATED_EVENT
May 14 00:15:40.808779 containerd[1528]: time="2025-05-14T00:15:40.808765633Z" level=warning msg="container event discarded" container=7ad623e2f4e8318218c8b161ecfb2f68b228fe7d8b3049e8013d47fb567c9d7d type=CONTAINER_STARTED_EVENT
May 14 00:15:42.026601 kubelet[2809]: E0514 00:15:42.026501 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:42.120270 containerd[1528]: time="2025-05-14T00:15:42.120168354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"342d0e5944eb7003c1ae28012a137e6cdf78c3a2bf0a45c9947461c8a76deccd\" pid:4395 exited_at:{seconds:1747181742 nanos:119659952}"
May 14 00:15:43.557204 containerd[1528]: time="2025-05-14T00:15:43.557072672Z" level=warning msg="container event discarded" container=e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1 type=CONTAINER_CREATED_EVENT
May 14 00:15:43.616905 containerd[1528]: time="2025-05-14T00:15:43.616756673Z" level=warning msg="container event discarded" container=e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1 type=CONTAINER_STARTED_EVENT
May 14 00:15:47.028050 kubelet[2809]: E0514 00:15:47.027967 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:47.194905 containerd[1528]: time="2025-05-14T00:15:47.194725471Z" level=warning msg="container event discarded" container=c68edb6db6f4e3dff6336790687425439f884113334df71a916307a56431b00d type=CONTAINER_CREATED_EVENT
May 14 00:15:47.194905 containerd[1528]: time="2025-05-14T00:15:47.194887303Z" level=warning msg="container event discarded" container=c68edb6db6f4e3dff6336790687425439f884113334df71a916307a56431b00d type=CONTAINER_STARTED_EVENT
May 14 00:15:47.310438 containerd[1528]: time="2025-05-14T00:15:47.310200876Z" level=warning msg="container event discarded" container=44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7 type=CONTAINER_CREATED_EVENT
May 14 00:15:47.310438 containerd[1528]: time="2025-05-14T00:15:47.310294501Z" level=warning msg="container event discarded" container=44ee60be99d3f4f5a1d079644cb13f79da20f0c09ea5abcd31c28c77c0b051f7 type=CONTAINER_STARTED_EVENT
May 14 00:15:50.176472 containerd[1528]: time="2025-05-14T00:15:50.176379611Z" level=warning msg="container event discarded" container=027678d19c5103161b4fd7f1b3404b0c42082cfc92c7139d793ff87fd04f8e1b type=CONTAINER_CREATED_EVENT
May 14 00:15:50.263840 containerd[1528]: time="2025-05-14T00:15:50.263705541Z" level=warning msg="container event discarded" container=027678d19c5103161b4fd7f1b3404b0c42082cfc92c7139d793ff87fd04f8e1b type=CONTAINER_STARTED_EVENT
May 14 00:15:52.028832 kubelet[2809]: E0514 00:15:52.028747 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:52.111148 containerd[1528]: time="2025-05-14T00:15:52.111044341Z" level=warning msg="container event discarded" container=dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23 type=CONTAINER_CREATED_EVENT
May 14 00:15:52.200546 containerd[1528]: time="2025-05-14T00:15:52.200430726Z" level=warning msg="container event discarded" container=dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23 type=CONTAINER_STARTED_EVENT
May 14 00:15:52.388256 containerd[1528]: time="2025-05-14T00:15:52.387989405Z" level=warning msg="container event discarded" container=dc0803b917f54cd1174f52dd6bdee598ee638fbccc8d5b1a34bf2fdda32e7d23 type=CONTAINER_STOPPED_EVENT
May 14 00:15:57.029689 kubelet[2809]: E0514 00:15:57.029613 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:15:58.171759 containerd[1528]: time="2025-05-14T00:15:58.171628238Z" level=warning msg="container event discarded" container=e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c type=CONTAINER_CREATED_EVENT
May 14 00:15:58.277349 containerd[1528]: time="2025-05-14T00:15:58.277246683Z" level=warning msg="container event discarded" container=e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c type=CONTAINER_STARTED_EVENT
May 14 00:15:58.916379 containerd[1528]: time="2025-05-14T00:15:58.916219073Z" level=warning msg="container event discarded" container=e9721195ea45d37f715057074c9b7632108963526b6d75843213e38ad2ed7f6c type=CONTAINER_STOPPED_EVENT
May 14 00:16:02.029957 kubelet[2809]: E0514 00:16:02.029879 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:07.031047 kubelet[2809]: E0514 00:16:07.030931 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:08.704620 containerd[1528]: time="2025-05-14T00:16:08.704526616Z" level=warning msg="container event discarded" container=eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d type=CONTAINER_CREATED_EVENT
May 14 00:16:09.018560 containerd[1528]: time="2025-05-14T00:16:09.018318160Z" level=warning msg="container event discarded" container=eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d type=CONTAINER_STARTED_EVENT
May 14 00:16:12.032008 kubelet[2809]: E0514 00:16:12.031900 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:12.132948 containerd[1528]: time="2025-05-14T00:16:12.132884376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"7db7f0a0ae6c3da24e2fe5d16bf82abfef9e3cf9310cd26d7ea96aec8500c273\" pid:4432 exited_at:{seconds:1747181772 nanos:132301424}"
May 14 00:16:17.032729 kubelet[2809]: E0514 00:16:17.032660 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:22.033312 kubelet[2809]: E0514 00:16:22.033217 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:27.034380 kubelet[2809]: E0514 00:16:27.034278 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:32.034927 kubelet[2809]: E0514 00:16:32.034790 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:37.036197 kubelet[2809]: E0514 00:16:37.036104 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:42.037274 kubelet[2809]: E0514 00:16:42.036284 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:42.133823 containerd[1528]: time="2025-05-14T00:16:42.133713086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"54bda6c5eb7b739db8d05d08039b7b90e16720a1026e785d49c17d2c138028df\" pid:4468 exited_at:{seconds:1747181802 nanos:131251788}"
May 14 00:16:47.036536 kubelet[2809]: E0514 00:16:47.036445 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
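The "container event discarded" warnings that begin at 00:15:29 appear to be backlogged container lifecycle events being dropped because no consumer kept up with containerd's event stream while the runtime was wedged; the discarded topics reference containers created much earlier, including eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d itself. For comparison, a minimal event subscriber using the containerd Go client looks like the sketch below; the socket path is an assumption for this host:

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
    )

    func main() {
        // Assumed socket path; adjust for the host being debugged.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer client.Close()

        // Subscribe streams event envelopes across all namespaces. A consumer
        // that stalls long enough leaves containerd no choice but to drop
        // events, which surfaces as the warnings seen above.
        envelopes, errs := client.Subscribe(context.Background())
        for {
            select {
            case e := <-envelopes:
                fmt.Printf("%s %s %s\n", e.Timestamp, e.Namespace, e.Topic)
            case err := <-errs:
                log.Fatalf("event stream error: %v", err)
            }
        }
    }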
May 14 00:16:49.775996 update_engine[1500]: I20250514 00:16:49.775854 1500 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 14 00:16:49.775996 update_engine[1500]: I20250514 00:16:49.775952 1500 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 14 00:16:49.780973 update_engine[1500]: I20250514 00:16:49.779287 1500 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 14 00:16:49.780973 update_engine[1500]: I20250514 00:16:49.780124 1500 omaha_request_params.cc:62] Current group set to alpha
May 14 00:16:49.780973 update_engine[1500]: I20250514 00:16:49.780396 1500 update_attempter.cc:499] Already updated boot flags. Skipping.
May 14 00:16:49.780973 update_engine[1500]: I20250514 00:16:49.780421 1500 update_attempter.cc:643] Scheduling an action processor start.
May 14 00:16:49.780973 update_engine[1500]: I20250514 00:16:49.780457 1500 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 00:16:49.780973 update_engine[1500]: I20250514 00:16:49.780530 1500 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 14 00:16:49.780973 update_engine[1500]: I20250514 00:16:49.780645 1500 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 00:16:49.780973 update_engine[1500]: I20250514 00:16:49.780664 1500 omaha_request_action.cc:272] Request:
May 14 00:16:49.780973 update_engine[1500]:
May 14 00:16:49.780973 update_engine[1500]:
May 14 00:16:49.780973 update_engine[1500]:
May 14 00:16:49.780973 update_engine[1500]:
May 14 00:16:49.780973 update_engine[1500]:
May 14 00:16:49.780973 update_engine[1500]:
May 14 00:16:49.780973 update_engine[1500]:
May 14 00:16:49.780973 update_engine[1500]:
May 14 00:16:49.780973 update_engine[1500]: I20250514 00:16:49.780678 1500 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 00:16:49.796852 locksmithd[1531]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 14 00:16:49.801513 update_engine[1500]: I20250514 00:16:49.801450 1500 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 00:16:49.802178 update_engine[1500]: I20250514 00:16:49.802100 1500 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 00:16:49.803568 update_engine[1500]: E20250514 00:16:49.803503 1500 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 00:16:49.803659 update_engine[1500]: I20250514 00:16:49.803630 1500 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 14 00:16:52.037356 kubelet[2809]: E0514 00:16:52.037207 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:57.037960 kubelet[2809]: E0514 00:16:57.037883 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:16:59.631243 update_engine[1500]: I20250514 00:16:59.631114 1500 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 00:16:59.631807 update_engine[1500]: I20250514 00:16:59.631567 1500 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 00:16:59.632024 update_engine[1500]: I20250514 00:16:59.631971 1500 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 00:16:59.632850 update_engine[1500]: E20250514 00:16:59.632742 1500 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 00:16:59.633022 update_engine[1500]: I20250514 00:16:59.632885 1500 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 14 00:17:02.038925 kubelet[2809]: E0514 00:17:02.038780 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:07.039724 kubelet[2809]: E0514 00:17:07.039656 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:09.629780 update_engine[1500]: I20250514 00:17:09.629647 1500 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 00:17:09.631463 update_engine[1500]: I20250514 00:17:09.630152 1500 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 00:17:09.631463 update_engine[1500]: I20250514 00:17:09.630754 1500 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 00:17:09.632188 update_engine[1500]: E20250514 00:17:09.632118 1500 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 00:17:09.632330 update_engine[1500]: I20250514 00:17:09.632207 1500 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 14 00:17:12.040699 kubelet[2809]: E0514 00:17:12.040647 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:12.131418 containerd[1528]: time="2025-05-14T00:17:12.131258062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"439a1c903f48c9812b6577dfcd7bbc0fab163585dec055dd525cef6a0ce6e1ee\" pid:4497 exited_at:{seconds:1747181832 nanos:130753603}"
May 14 00:17:17.042826 kubelet[2809]: E0514 00:17:17.042757 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:19.631287 update_engine[1500]: I20250514 00:17:19.630293 1500 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 00:17:19.633088 update_engine[1500]: I20250514 00:17:19.632565 1500 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 00:17:19.633088 update_engine[1500]: I20250514 00:17:19.633009 1500 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 00:17:19.634272 update_engine[1500]: E20250514 00:17:19.634057 1500 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 00:17:19.634272 update_engine[1500]: I20250514 00:17:19.634130 1500 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 00:17:19.634272 update_engine[1500]: I20250514 00:17:19.634145 1500 omaha_request_action.cc:617] Omaha request response:
May 14 00:17:19.635379 update_engine[1500]: E20250514 00:17:19.634323 1500 omaha_request_action.cc:636] Omaha request network transfer failed.
May 14 00:17:19.635379 update_engine[1500]: I20250514 00:17:19.634364 1500 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 14 00:17:19.635379 update_engine[1500]: I20250514 00:17:19.634374 1500 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 00:17:19.635379 update_engine[1500]: I20250514 00:17:19.634382 1500 update_attempter.cc:306] Processing Done.
May 14 00:17:19.635379 update_engine[1500]: E20250514 00:17:19.634402 1500 update_attempter.cc:619] Update failed.
May 14 00:17:19.635379 update_engine[1500]: I20250514 00:17:19.634411 1500 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 14 00:17:19.635379 update_engine[1500]: I20250514 00:17:19.634423 1500 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 14 00:17:19.635379 update_engine[1500]: I20250514 00:17:19.634436 1500 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 14 00:17:19.635379 update_engine[1500]: I20250514 00:17:19.634799 1500 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 00:17:19.635379 update_engine[1500]: I20250514 00:17:19.634850 1500 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 00:17:19.635379 update_engine[1500]: I20250514 00:17:19.634860 1500 omaha_request_action.cc:272] Request:
May 14 00:17:19.635379 update_engine[1500]:
May 14 00:17:19.635379 update_engine[1500]:
May 14 00:17:19.635379 update_engine[1500]:
May 14 00:17:19.635379 update_engine[1500]:
May 14 00:17:19.635379 update_engine[1500]:
May 14 00:17:19.635379 update_engine[1500]:
May 14 00:17:19.635379 update_engine[1500]: I20250514 00:17:19.634868 1500 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 00:17:19.637828 update_engine[1500]: I20250514 00:17:19.635198 1500 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 00:17:19.637828 update_engine[1500]: I20250514 00:17:19.635545 1500 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 00:17:19.637828 update_engine[1500]: E20250514 00:17:19.637357 1500 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 00:17:19.637828 update_engine[1500]: I20250514 00:17:19.637429 1500 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 00:17:19.637828 update_engine[1500]: I20250514 00:17:19.637440 1500 omaha_request_action.cc:617] Omaha request response:
May 14 00:17:19.637828 update_engine[1500]: I20250514 00:17:19.637450 1500 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 00:17:19.637828 update_engine[1500]: I20250514 00:17:19.637459 1500 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 00:17:19.637828 update_engine[1500]: I20250514 00:17:19.637466 1500 update_attempter.cc:306] Processing Done.
May 14 00:17:19.637828 update_engine[1500]: I20250514 00:17:19.637480 1500 update_attempter.cc:310] Error event sent.
May 14 00:17:19.637828 update_engine[1500]: I20250514 00:17:19.637494 1500 update_check_scheduler.cc:74] Next update check in 46m8s
May 14 00:17:19.638556 locksmithd[1531]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 14 00:17:19.640564 locksmithd[1531]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 14 00:17:22.043803 kubelet[2809]: E0514 00:17:22.043747 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:24.817983 kubelet[2809]: E0514 00:17:24.817869 2809 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:17:24.817983 kubelet[2809]: E0514 00:17:24.817977 2809 kubelet.go:2993] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:17:27.044026 kubelet[2809]: E0514 00:17:27.043967 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:32.045397 kubelet[2809]: E0514 00:17:32.044963 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:37.046066 kubelet[2809]: E0514 00:17:37.046001 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:42.046811 kubelet[2809]: E0514 00:17:42.046716 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:42.131181 containerd[1528]: time="2025-05-14T00:17:42.131124453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"24931cac7db4b3bce1d513d2f9ce796d5d3e0b2e2b7b0591e20bf1f8afffcf84\" pid:4538 exited_at:{seconds:1747181862 nanos:130751471}"
May 14 00:17:47.046981 kubelet[2809]: E0514 00:17:47.046891 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:52.047122 kubelet[2809]: E0514 00:17:52.047050 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:17:57.048171 kubelet[2809]: E0514 00:17:57.048076 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:18:02.048528 kubelet[2809]: E0514 00:18:02.048450 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:18:07.049153 kubelet[2809]: E0514 00:18:07.049034 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:18:12.049693 kubelet[2809]: E0514 00:18:12.049636 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:18:12.118748 containerd[1528]: time="2025-05-14T00:18:12.118652524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"7f473e341a73206e18094e769a200cb3e73ea22b2e0a806163d5b93c54422422\" pid:4568 exited_at:{seconds:1747181892 nanos:117880152}"
May 14 00:18:17.050280 kubelet[2809]: E0514 00:18:17.050137 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:18:22.051390 kubelet[2809]: E0514 00:18:22.051292 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:18:27.051978 kubelet[2809]: E0514 00:18:27.051858 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
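The Omaha exchange above fails by design rather than by accident: the update server host is literally the string "disabled", so curl's DNS lookup can never succeed, and update_engine converts the failure and reschedules ("Next update check in 46m8s"). The empty update_engine lines after each "Request:" are where the XML request body appeared in the original journal; it did not survive extraction and is left blank here. On Flatcar this kind of opt-out is conventionally configured through update.conf; a sketch of what such a configuration might look like, with the path and key names inferred from the logged "Current group set to alpha" and "Posting an Omaha request to disabled" rather than read from this host:

    # /etc/flatcar/update.conf (assumed path and keys)
    GROUP=alpha
    SERVER=disabled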
kubelet[2809]: E0514 00:18:32.052584 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:18:37.053775 kubelet[2809]: E0514 00:18:37.053669 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:18:42.054666 kubelet[2809]: E0514 00:18:42.054605 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:18:42.125565 containerd[1528]: time="2025-05-14T00:18:42.125475710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"83df7f8fbc1b5e5e4adda551bed22e2f72b4da85639246c1f2405398930b7a2f\" pid:4600 exited_at:{seconds:1747181922 nanos:124999727}" May 14 00:18:47.054737 kubelet[2809]: E0514 00:18:47.054692 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:18:52.055852 kubelet[2809]: E0514 00:18:52.055774 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:18:57.056653 kubelet[2809]: E0514 00:18:57.056573 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:02.057089 kubelet[2809]: E0514 00:19:02.056987 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:07.057620 kubelet[2809]: E0514 00:19:07.057529 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:12.058360 kubelet[2809]: E0514 00:19:12.058304 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:12.120343 containerd[1528]: time="2025-05-14T00:19:12.120196932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"80a6709981beaea1bdf529a52080d1f3bdf4b6e8e747f34ced5b9ac4e112cf2a\" pid:4637 exited_at:{seconds:1747181952 nanos:119570365}" May 14 00:19:17.059698 kubelet[2809]: E0514 00:19:17.059555 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:22.060042 kubelet[2809]: E0514 00:19:22.059924 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:27.061265 kubelet[2809]: E0514 00:19:27.061152 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:29.818753 kubelet[2809]: E0514 00:19:29.818601 2809 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 14 00:19:29.818753 kubelet[2809]: E0514 00:19:29.818692 2809 kubelet.go:2993] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 14 00:19:31.311814 systemd[1]: Started sshd@8-37.27.39.104:22-139.178.89.65:40094.service - OpenSSH per-connection server daemon (139.178.89.65:40094). May 14 00:19:32.061964 kubelet[2809]: E0514 00:19:32.061883 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:32.361370 sshd[4654]: Accepted publickey for core from 139.178.89.65 port 40094 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:19:32.363089 sshd-session[4654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:19:32.373321 systemd-logind[1496]: New session 8 of user core. May 14 00:19:32.378467 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 14 00:19:33.960598 sshd[4664]: Connection closed by 139.178.89.65 port 40094 May 14 00:19:33.961518 sshd-session[4654]: pam_unix(sshd:session): session closed for user core May 14 00:19:33.967799 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. May 14 00:19:33.968511 systemd[1]: sshd@8-37.27.39.104:22-139.178.89.65:40094.service: Deactivated successfully. May 14 00:19:33.974873 systemd[1]: session-8.scope: Deactivated successfully. May 14 00:19:33.978680 systemd-logind[1496]: Removed session 8. May 14 00:19:37.063082 kubelet[2809]: E0514 00:19:37.063028 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:39.132991 systemd[1]: Started sshd@9-37.27.39.104:22-139.178.89.65:49784.service - OpenSSH per-connection server daemon (139.178.89.65:49784). May 14 00:19:40.141215 sshd[4680]: Accepted publickey for core from 139.178.89.65 port 49784 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:19:40.143949 sshd-session[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:19:40.150180 systemd-logind[1496]: New session 9 of user core. May 14 00:19:40.158362 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 00:19:40.971694 sshd[4682]: Connection closed by 139.178.89.65 port 49784 May 14 00:19:40.973552 sshd-session[4680]: pam_unix(sshd:session): session closed for user core May 14 00:19:40.984641 systemd[1]: sshd@9-37.27.39.104:22-139.178.89.65:49784.service: Deactivated successfully. May 14 00:19:40.988829 systemd[1]: session-9.scope: Deactivated successfully. May 14 00:19:40.990516 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit. May 14 00:19:40.992563 systemd-logind[1496]: Removed session 9. May 14 00:19:42.063387 kubelet[2809]: E0514 00:19:42.063345 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:42.124143 containerd[1528]: time="2025-05-14T00:19:42.124074297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"55d75f0e17977fb4e148caabb2bfbab90d4c6ebfe15c0cb6a76d99d7c76ef253\" pid:4709 exited_at:{seconds:1747181982 nanos:123522510}" May 14 00:19:46.149199 systemd[1]: Started sshd@10-37.27.39.104:22-139.178.89.65:49792.service - OpenSSH per-connection server daemon (139.178.89.65:49792). May 14 00:19:47.064152 kubelet[2809]: E0514 00:19:47.064101 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:47.188218 sshd[4722]: Accepted publickey for core from 139.178.89.65 port 49792 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:19:47.189931 sshd-session[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:19:47.197016 systemd-logind[1496]: New session 10 of user core. May 14 00:19:47.209453 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 00:19:47.996401 sshd[4724]: Connection closed by 139.178.89.65 port 49792 May 14 00:19:47.997874 sshd-session[4722]: pam_unix(sshd:session): session closed for user core May 14 00:19:48.008018 systemd[1]: sshd@10-37.27.39.104:22-139.178.89.65:49792.service: Deactivated successfully. May 14 00:19:48.013141 systemd[1]: session-10.scope: Deactivated successfully. May 14 00:19:48.015536 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. May 14 00:19:48.019930 systemd-logind[1496]: Removed session 10. 
May 14 00:19:52.064852 kubelet[2809]: E0514 00:19:52.064773 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:19:53.172522 systemd[1]: Started sshd@11-37.27.39.104:22-139.178.89.65:46420.service - OpenSSH per-connection server daemon (139.178.89.65:46420). May 14 00:19:54.188206 sshd[4737]: Accepted publickey for core from 139.178.89.65 port 46420 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:19:54.190603 sshd-session[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:19:54.200565 systemd-logind[1496]: New session 11 of user core. May 14 00:19:54.205531 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 00:19:55.001100 sshd[4739]: Connection closed by 139.178.89.65 port 46420 May 14 00:19:55.002172 sshd-session[4737]: pam_unix(sshd:session): session closed for user core May 14 00:19:55.011480 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. May 14 00:19:55.013319 systemd[1]: sshd@11-37.27.39.104:22-139.178.89.65:46420.service: Deactivated successfully. May 14 00:19:55.016995 systemd[1]: session-11.scope: Deactivated successfully. May 14 00:19:55.021331 systemd-logind[1496]: Removed session 11. May 14 00:19:57.065051 kubelet[2809]: E0514 00:19:57.064957 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:20:00.174588 systemd[1]: Started sshd@12-37.27.39.104:22-139.178.89.65:35702.service - OpenSSH per-connection server daemon (139.178.89.65:35702). May 14 00:20:01.222458 sshd[4753]: Accepted publickey for core from 139.178.89.65 port 35702 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:20:01.224699 sshd-session[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:20:01.231646 systemd-logind[1496]: New session 12 of user core. May 14 00:20:01.237494 systemd[1]: Started session-12.scope - Session 12 of User core. May 14 00:20:02.038751 sshd[4755]: Connection closed by 139.178.89.65 port 35702 May 14 00:20:02.039901 sshd-session[4753]: pam_unix(sshd:session): session closed for user core May 14 00:20:02.047808 systemd[1]: sshd@12-37.27.39.104:22-139.178.89.65:35702.service: Deactivated successfully. May 14 00:20:02.052548 systemd[1]: session-12.scope: Deactivated successfully. May 14 00:20:02.055085 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. May 14 00:20:02.058340 systemd-logind[1496]: Removed session 12. May 14 00:20:02.065452 kubelet[2809]: E0514 00:20:02.065368 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:20:07.065946 kubelet[2809]: E0514 00:20:07.065869 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:20:07.211405 systemd[1]: Started sshd@13-37.27.39.104:22-139.178.89.65:39068.service - OpenSSH per-connection server daemon (139.178.89.65:39068). May 14 00:20:08.213848 sshd[4767]: Accepted publickey for core from 139.178.89.65 port 39068 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:20:08.216188 sshd-session[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:20:08.225130 systemd-logind[1496]: New session 13 of user core. May 14 00:20:08.231521 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 14 00:20:09.024511 sshd[4769]: Connection closed by 139.178.89.65 port 39068
May 14 00:20:09.027627 sshd-session[4767]: pam_unix(sshd:session): session closed for user core
May 14 00:20:09.033218 systemd[1]: sshd@13-37.27.39.104:22-139.178.89.65:39068.service: Deactivated successfully.
May 14 00:20:09.036794 systemd[1]: session-13.scope: Deactivated successfully.
May 14 00:20:09.038260 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit.
May 14 00:20:09.040345 systemd-logind[1496]: Removed session 13.
May 14 00:20:12.066839 kubelet[2809]: E0514 00:20:12.066775 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:20:12.119279 containerd[1528]: time="2025-05-14T00:20:12.119010425Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"3104477b0fed87f15a3fa8f198e40af61082572c66c775ebfc965eb255f87b91\" pid:4795 exited_at:{seconds:1747182012 nanos:118576672}"
May 14 00:20:14.199406 systemd[1]: Started sshd@14-37.27.39.104:22-139.178.89.65:39076.service - OpenSSH per-connection server daemon (139.178.89.65:39076).
May 14 00:20:15.233488 sshd[4808]: Accepted publickey for core from 139.178.89.65 port 39076 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:20:15.236066 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:20:15.243284 systemd-logind[1496]: New session 14 of user core.
May 14 00:20:15.249968 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 00:20:16.039058 sshd[4810]: Connection closed by 139.178.89.65 port 39076
May 14 00:20:16.039784 sshd-session[4808]: pam_unix(sshd:session): session closed for user core
May 14 00:20:16.045277 systemd[1]: sshd@14-37.27.39.104:22-139.178.89.65:39076.service: Deactivated successfully.
May 14 00:20:16.048973 systemd[1]: session-14.scope: Deactivated successfully.
May 14 00:20:16.053824 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit.
May 14 00:20:16.056282 systemd-logind[1496]: Removed session 14.
May 14 00:20:17.067179 kubelet[2809]: E0514 00:20:17.067102 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:20:21.218862 systemd[1]: Started sshd@15-37.27.39.104:22-139.178.89.65:59412.service - OpenSSH per-connection server daemon (139.178.89.65:59412).
May 14 00:20:22.068087 kubelet[2809]: E0514 00:20:22.067991 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:20:22.227862 sshd[4823]: Accepted publickey for core from 139.178.89.65 port 59412 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:20:22.230190 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:20:22.238661 systemd-logind[1496]: New session 15 of user core.
May 14 00:20:22.244468 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 00:20:23.029798 sshd[4825]: Connection closed by 139.178.89.65 port 59412
May 14 00:20:23.030548 sshd-session[4823]: pam_unix(sshd:session): session closed for user core
May 14 00:20:23.035913 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit.
May 14 00:20:23.036851 systemd[1]: sshd@15-37.27.39.104:22-139.178.89.65:59412.service: Deactivated successfully.
May 14 00:20:23.040583 systemd[1]: session-15.scope: Deactivated successfully.
May 14 00:20:23.043294 systemd-logind[1496]: Removed session 15.
May 14 00:20:27.068449 kubelet[2809]: E0514 00:20:27.068355 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:20:28.200591 systemd[1]: Started sshd@16-37.27.39.104:22-139.178.89.65:50466.service - OpenSSH per-connection server daemon (139.178.89.65:50466).
May 14 00:20:29.206392 sshd[4838]: Accepted publickey for core from 139.178.89.65 port 50466 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:20:29.209186 sshd-session[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:20:29.218187 systemd-logind[1496]: New session 16 of user core.
May 14 00:20:29.225735 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 00:20:29.983265 sshd[4840]: Connection closed by 139.178.89.65 port 50466
May 14 00:20:29.984957 sshd-session[4838]: pam_unix(sshd:session): session closed for user core
May 14 00:20:29.991003 systemd[1]: sshd@16-37.27.39.104:22-139.178.89.65:50466.service: Deactivated successfully.
May 14 00:20:29.994772 systemd[1]: session-16.scope: Deactivated successfully.
May 14 00:20:29.996330 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit.
May 14 00:20:29.998398 systemd-logind[1496]: Removed session 16.
May 14 00:20:32.069396 kubelet[2809]: E0514 00:20:32.069326 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:20:35.155610 systemd[1]: Started sshd@17-37.27.39.104:22-139.178.89.65:50482.service - OpenSSH per-connection server daemon (139.178.89.65:50482).
May 14 00:20:36.160525 sshd[4855]: Accepted publickey for core from 139.178.89.65 port 50482 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:20:36.163705 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:20:36.171699 systemd-logind[1496]: New session 17 of user core.
May 14 00:20:36.179485 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 00:20:36.931388 sshd[4857]: Connection closed by 139.178.89.65 port 50482
May 14 00:20:36.933591 sshd-session[4855]: pam_unix(sshd:session): session closed for user core
May 14 00:20:36.940822 systemd[1]: sshd@17-37.27.39.104:22-139.178.89.65:50482.service: Deactivated successfully.
May 14 00:20:36.943903 systemd[1]: session-17.scope: Deactivated successfully.
May 14 00:20:36.947360 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit.
May 14 00:20:36.949175 systemd-logind[1496]: Removed session 17.
May 14 00:20:37.069494 kubelet[2809]: E0514 00:20:37.069423 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:20:42.069802 kubelet[2809]: E0514 00:20:42.069754 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:20:42.107336 systemd[1]: Started sshd@18-37.27.39.104:22-139.178.89.65:34606.service - OpenSSH per-connection server daemon (139.178.89.65:34606).
May 14 00:20:42.136387 containerd[1528]: time="2025-05-14T00:20:42.136331473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"46a8363e5394f90710996396f5036bbc074c70eeedb5559c7608816f64c356e1\" pid:4888 exited_at:{seconds:1747182042 nanos:135964886}"
May 14 00:20:43.121072 sshd[4899]: Accepted publickey for core from 139.178.89.65 port 34606 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:20:43.122979 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:20:43.132882 systemd-logind[1496]: New session 18 of user core.
May 14 00:20:43.141592 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 00:20:43.948783 sshd[4902]: Connection closed by 139.178.89.65 port 34606
May 14 00:20:43.949563 sshd-session[4899]: pam_unix(sshd:session): session closed for user core
May 14 00:20:43.954588 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit.
May 14 00:20:43.955625 systemd[1]: sshd@18-37.27.39.104:22-139.178.89.65:34606.service: Deactivated successfully.
May 14 00:20:43.959258 systemd[1]: session-18.scope: Deactivated successfully.
May 14 00:20:43.961063 systemd-logind[1496]: Removed session 18.
May 14 00:20:47.070855 kubelet[2809]: E0514 00:20:47.070774 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:20:49.122490 systemd[1]: Started sshd@19-37.27.39.104:22-139.178.89.65:50444.service - OpenSSH per-connection server daemon (139.178.89.65:50444).
May 14 00:20:50.125115 sshd[4923]: Accepted publickey for core from 139.178.89.65 port 50444 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:20:50.127411 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:20:50.136197 systemd-logind[1496]: New session 19 of user core.
May 14 00:20:50.143488 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 00:20:50.891862 sshd[4925]: Connection closed by 139.178.89.65 port 50444
May 14 00:20:50.893922 sshd-session[4923]: pam_unix(sshd:session): session closed for user core
May 14 00:20:50.901879 systemd[1]: sshd@19-37.27.39.104:22-139.178.89.65:50444.service: Deactivated successfully.
May 14 00:20:50.910709 systemd[1]: session-19.scope: Deactivated successfully.
May 14 00:20:50.912813 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit.
May 14 00:20:50.918828 systemd-logind[1496]: Removed session 19.
May 14 00:20:52.072044 kubelet[2809]: E0514 00:20:52.071956 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:20:56.067481 systemd[1]: Started sshd@20-37.27.39.104:22-139.178.89.65:50448.service - OpenSSH per-connection server daemon (139.178.89.65:50448).
May 14 00:20:57.071681 sshd[4938]: Accepted publickey for core from 139.178.89.65 port 50448 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:20:57.072813 kubelet[2809]: E0514 00:20:57.072749 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:20:57.073896 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:20:57.089286 systemd-logind[1496]: New session 20 of user core.
May 14 00:20:57.095457 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 00:20:57.852256 sshd[4940]: Connection closed by 139.178.89.65 port 50448
May 14 00:20:57.853048 sshd-session[4938]: pam_unix(sshd:session): session closed for user core
May 14 00:20:57.857758 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit.
May 14 00:20:57.858301 systemd[1]: sshd@20-37.27.39.104:22-139.178.89.65:50448.service: Deactivated successfully.
May 14 00:20:57.861777 systemd[1]: session-20.scope: Deactivated successfully.
May 14 00:20:57.863623 systemd-logind[1496]: Removed session 20.
May 14 00:21:02.073480 kubelet[2809]: E0514 00:21:02.073399 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:03.028891 systemd[1]: Started sshd@21-37.27.39.104:22-139.178.89.65:58224.service - OpenSSH per-connection server daemon (139.178.89.65:58224).
May 14 00:21:04.034210 sshd[4954]: Accepted publickey for core from 139.178.89.65 port 58224 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:21:04.036492 sshd-session[4954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:21:04.043640 systemd-logind[1496]: New session 21 of user core.
May 14 00:21:04.048521 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 00:21:04.838771 sshd[4956]: Connection closed by 139.178.89.65 port 58224
May 14 00:21:04.840152 sshd-session[4954]: pam_unix(sshd:session): session closed for user core
May 14 00:21:04.845296 systemd[1]: sshd@21-37.27.39.104:22-139.178.89.65:58224.service: Deactivated successfully.
May 14 00:21:04.848481 systemd[1]: session-21.scope: Deactivated successfully.
May 14 00:21:04.851375 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit.
May 14 00:21:04.854506 systemd-logind[1496]: Removed session 21.
May 14 00:21:07.074598 kubelet[2809]: E0514 00:21:07.074527 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:10.013299 systemd[1]: Started sshd@22-37.27.39.104:22-139.178.89.65:41462.service - OpenSSH per-connection server daemon (139.178.89.65:41462).
May 14 00:21:11.037593 sshd[4969]: Accepted publickey for core from 139.178.89.65 port 41462 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:21:11.041120 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:21:11.055537 systemd-logind[1496]: New session 22 of user core.
May 14 00:21:11.059499 systemd[1]: Started session-22.scope - Session 22 of User core.
May 14 00:21:11.837392 sshd[4973]: Connection closed by 139.178.89.65 port 41462
May 14 00:21:11.838614 sshd-session[4969]: pam_unix(sshd:session): session closed for user core
May 14 00:21:11.844895 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit.
May 14 00:21:11.845728 systemd[1]: sshd@22-37.27.39.104:22-139.178.89.65:41462.service: Deactivated successfully.
May 14 00:21:11.849370 systemd[1]: session-22.scope: Deactivated successfully.
May 14 00:21:11.851216 systemd-logind[1496]: Removed session 22.
May 14 00:21:12.075206 kubelet[2809]: E0514 00:21:12.075126 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:12.130679 containerd[1528]: time="2025-05-14T00:21:12.130508245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"0f74c72a4b93fa17f9387d14808c81d7440fed462951e24c511fb5409ba020bf\" pid:4998 exited_at:{seconds:1747182072 nanos:130009070}"
May 14 00:21:17.010409 systemd[1]: Started sshd@23-37.27.39.104:22-139.178.89.65:58174.service - OpenSSH per-connection server daemon (139.178.89.65:58174).
May 14 00:21:17.075713 kubelet[2809]: E0514 00:21:17.075546 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:18.055896 sshd[5010]: Accepted publickey for core from 139.178.89.65 port 58174 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:21:18.058547 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:21:18.066939 systemd-logind[1496]: New session 23 of user core.
May 14 00:21:18.073566 systemd[1]: Started session-23.scope - Session 23 of User core.
May 14 00:21:18.877071 sshd[5012]: Connection closed by 139.178.89.65 port 58174
May 14 00:21:18.879495 sshd-session[5010]: pam_unix(sshd:session): session closed for user core
May 14 00:21:18.886841 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit.
May 14 00:21:18.888000 systemd[1]: sshd@23-37.27.39.104:22-139.178.89.65:58174.service: Deactivated successfully.
May 14 00:21:18.891914 systemd[1]: session-23.scope: Deactivated successfully.
May 14 00:21:18.893962 systemd-logind[1496]: Removed session 23.
May 14 00:21:22.076245 kubelet[2809]: E0514 00:21:22.076162 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:24.052009 systemd[1]: Started sshd@24-37.27.39.104:22-139.178.89.65:58180.service - OpenSSH per-connection server daemon (139.178.89.65:58180).
May 14 00:21:25.082857 sshd[5025]: Accepted publickey for core from 139.178.89.65 port 58180 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:21:25.085697 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:21:25.094991 systemd-logind[1496]: New session 24 of user core.
May 14 00:21:25.100557 systemd[1]: Started session-24.scope - Session 24 of User core.
May 14 00:21:25.852500 sshd[5027]: Connection closed by 139.178.89.65 port 58180
May 14 00:21:25.853551 sshd-session[5025]: pam_unix(sshd:session): session closed for user core
May 14 00:21:25.859378 systemd[1]: sshd@24-37.27.39.104:22-139.178.89.65:58180.service: Deactivated successfully.
May 14 00:21:25.864454 systemd[1]: session-24.scope: Deactivated successfully.
May 14 00:21:25.866424 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit.
May 14 00:21:25.868518 systemd-logind[1496]: Removed session 24.
May 14 00:21:27.077415 kubelet[2809]: E0514 00:21:27.077341 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:31.028104 systemd[1]: Started sshd@25-37.27.39.104:22-139.178.89.65:34876.service - OpenSSH per-connection server daemon (139.178.89.65:34876).
May 14 00:21:32.039704 sshd[5040]: Accepted publickey for core from 139.178.89.65 port 34876 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:21:32.042260 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:21:32.052299 systemd-logind[1496]: New session 25 of user core.
May 14 00:21:32.059536 systemd[1]: Started session-25.scope - Session 25 of User core.
May 14 00:21:32.078129 kubelet[2809]: E0514 00:21:32.078051 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:32.821916 sshd[5042]: Connection closed by 139.178.89.65 port 34876
May 14 00:21:32.823061 sshd-session[5040]: pam_unix(sshd:session): session closed for user core
May 14 00:21:32.828790 systemd[1]: sshd@25-37.27.39.104:22-139.178.89.65:34876.service: Deactivated successfully.
May 14 00:21:32.832636 systemd[1]: session-25.scope: Deactivated successfully.
May 14 00:21:32.834774 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit.
May 14 00:21:32.836812 systemd-logind[1496]: Removed session 25.
May 14 00:21:34.820329 kubelet[2809]: E0514 00:21:34.820256 2809 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:21:34.821072 kubelet[2809]: E0514 00:21:34.820445 2809 kubelet.go:2993] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:21:37.078461 kubelet[2809]: E0514 00:21:37.078395 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:37.999669 systemd[1]: Started sshd@26-37.27.39.104:22-139.178.89.65:40842.service - OpenSSH per-connection server daemon (139.178.89.65:40842).
May 14 00:21:39.014313 sshd[5057]: Accepted publickey for core from 139.178.89.65 port 40842 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:21:39.017356 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:21:39.027475 systemd-logind[1496]: New session 26 of user core.
May 14 00:21:39.036761 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 00:21:39.812250 sshd[5059]: Connection closed by 139.178.89.65 port 40842
May 14 00:21:39.813549 sshd-session[5057]: pam_unix(sshd:session): session closed for user core
May 14 00:21:39.818767 systemd[1]: sshd@26-37.27.39.104:22-139.178.89.65:40842.service: Deactivated successfully.
May 14 00:21:39.822370 systemd[1]: session-26.scope: Deactivated successfully.
May 14 00:21:39.824779 systemd-logind[1496]: Session 26 logged out. Waiting for processes to exit.
May 14 00:21:39.826472 systemd-logind[1496]: Removed session 26.
May 14 00:21:42.079476 kubelet[2809]: E0514 00:21:42.079393 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:42.129479 containerd[1528]: time="2025-05-14T00:21:42.129394214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"6c1517244168071f4bde6136dc585b57cbb041ab67dcbe4b9dba593948d0b6d5\" pid:5085 exited_at:{seconds:1747182102 nanos:128778730}"
May 14 00:21:44.990268 systemd[1]: Started sshd@27-37.27.39.104:22-139.178.89.65:40856.service - OpenSSH per-connection server daemon (139.178.89.65:40856).
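
The two 00:21:34 entries above are the most direct evidence of the root cause: the kubelet's CRI Status query to the runtime returned DeadlineExceeded, so its runtime sanity check fails and pod synchronization stays suspended. A diagnostic sketch to run on the node itself, reproducing the same kind of status query with a short deadline (the containerd socket path below is the stock default and an assumption here):

    # Issue a runtime status query with a bounded deadline via crictl.
    import subprocess

    CMD = ["crictl", "--runtime-endpoint",
           "unix:///run/containerd/containerd.sock", "info"]
    try:
        done = subprocess.run(CMD, capture_output=True, text=True, timeout=10)
        print(done.stdout or done.stderr)
    except subprocess.TimeoutExpired:
        print("status query timed out, matching the kubelet's DeadlineExceeded errors")
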
May 14 00:21:46.001196 sshd[5097]: Accepted publickey for core from 139.178.89.65 port 40856 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:21:46.003673 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:21:46.012476 systemd-logind[1496]: New session 27 of user core.
May 14 00:21:46.017977 systemd[1]: Started session-27.scope - Session 27 of User core.
May 14 00:21:46.794300 sshd[5099]: Connection closed by 139.178.89.65 port 40856
May 14 00:21:46.794767 sshd-session[5097]: pam_unix(sshd:session): session closed for user core
May 14 00:21:46.806485 systemd[1]: sshd@27-37.27.39.104:22-139.178.89.65:40856.service: Deactivated successfully.
May 14 00:21:46.809918 systemd[1]: session-27.scope: Deactivated successfully.
May 14 00:21:46.812869 systemd-logind[1496]: Session 27 logged out. Waiting for processes to exit.
May 14 00:21:46.814899 systemd-logind[1496]: Removed session 27.
May 14 00:21:47.080894 kubelet[2809]: E0514 00:21:47.080408 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:51.966874 systemd[1]: Started sshd@28-37.27.39.104:22-139.178.89.65:56382.service - OpenSSH per-connection server daemon (139.178.89.65:56382).
May 14 00:21:52.096933 kubelet[2809]: E0514 00:21:52.082568 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:53.000246 sshd[5120]: Accepted publickey for core from 139.178.89.65 port 56382 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:21:53.001156 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:21:53.011394 systemd-logind[1496]: New session 28 of user core.
May 14 00:21:53.015729 systemd[1]: Started session-28.scope - Session 28 of User core.
May 14 00:21:53.825439 sshd[5122]: Connection closed by 139.178.89.65 port 56382
May 14 00:21:53.826028 sshd-session[5120]: pam_unix(sshd:session): session closed for user core
May 14 00:21:53.834721 systemd[1]: sshd@28-37.27.39.104:22-139.178.89.65:56382.service: Deactivated successfully.
May 14 00:21:53.838683 systemd[1]: session-28.scope: Deactivated successfully.
May 14 00:21:53.842103 systemd-logind[1496]: Session 28 logged out. Waiting for processes to exit.
May 14 00:21:53.845935 systemd-logind[1496]: Removed session 28.
May 14 00:21:57.083559 kubelet[2809]: E0514 00:21:57.083395 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:21:59.001909 systemd[1]: Started sshd@29-37.27.39.104:22-139.178.89.65:56212.service - OpenSSH per-connection server daemon (139.178.89.65:56212).
May 14 00:22:00.022890 sshd[5134]: Accepted publickey for core from 139.178.89.65 port 56212 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:22:00.025390 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:22:00.034468 systemd-logind[1496]: New session 29 of user core.
May 14 00:22:00.044515 systemd[1]: Started session-29.scope - Session 29 of User core.
May 14 00:22:00.783042 sshd[5136]: Connection closed by 139.178.89.65 port 56212
May 14 00:22:00.784658 sshd-session[5134]: pam_unix(sshd:session): session closed for user core
May 14 00:22:00.792016 systemd[1]: sshd@29-37.27.39.104:22-139.178.89.65:56212.service: Deactivated successfully.
May 14 00:22:00.795162 systemd[1]: session-29.scope: Deactivated successfully.
May 14 00:22:00.797395 systemd-logind[1496]: Session 29 logged out. Waiting for processes to exit.
May 14 00:22:00.801749 systemd-logind[1496]: Removed session 29.
May 14 00:22:02.084377 kubelet[2809]: E0514 00:22:02.084269 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:05.954567 systemd[1]: Started sshd@30-37.27.39.104:22-139.178.89.65:56228.service - OpenSSH per-connection server daemon (139.178.89.65:56228).
May 14 00:22:06.958293 sshd[5149]: Accepted publickey for core from 139.178.89.65 port 56228 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:22:06.960273 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:22:06.968093 systemd-logind[1496]: New session 30 of user core.
May 14 00:22:06.972389 systemd[1]: Started session-30.scope - Session 30 of User core.
May 14 00:22:07.085247 kubelet[2809]: E0514 00:22:07.085166 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:07.743676 sshd[5151]: Connection closed by 139.178.89.65 port 56228
May 14 00:22:07.746874 sshd-session[5149]: pam_unix(sshd:session): session closed for user core
May 14 00:22:07.751358 systemd[1]: sshd@30-37.27.39.104:22-139.178.89.65:56228.service: Deactivated successfully.
May 14 00:22:07.755880 systemd[1]: session-30.scope: Deactivated successfully.
May 14 00:22:07.758184 systemd-logind[1496]: Session 30 logged out. Waiting for processes to exit.
May 14 00:22:07.761278 systemd-logind[1496]: Removed session 30.
May 14 00:22:12.085730 kubelet[2809]: E0514 00:22:12.085648 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:12.128476 containerd[1528]: time="2025-05-14T00:22:12.128346872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"d729b8ac2d973d2c2eeca31686a35e93a867e3613c2c9ea42ab19d60ebe5350a\" pid:5178 exited_at:{seconds:1747182132 nanos:127815688}"
May 14 00:22:12.914841 systemd[1]: Started sshd@31-37.27.39.104:22-139.178.89.65:55884.service - OpenSSH per-connection server daemon (139.178.89.65:55884).
May 14 00:22:13.920101 sshd[5196]: Accepted publickey for core from 139.178.89.65 port 55884 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:22:13.921047 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:22:13.929461 systemd-logind[1496]: New session 31 of user core.
May 14 00:22:13.940588 systemd[1]: Started session-31.scope - Session 31 of User core.
May 14 00:22:14.707534 sshd[5198]: Connection closed by 139.178.89.65 port 55884
May 14 00:22:14.709537 sshd-session[5196]: pam_unix(sshd:session): session closed for user core
May 14 00:22:14.715116 systemd-logind[1496]: Session 31 logged out. Waiting for processes to exit.
May 14 00:22:14.716156 systemd[1]: sshd@31-37.27.39.104:22-139.178.89.65:55884.service: Deactivated successfully.
May 14 00:22:14.719349 systemd[1]: session-31.scope: Deactivated successfully.
May 14 00:22:14.721586 systemd-logind[1496]: Removed session 31.
May 14 00:22:17.086942 kubelet[2809]: E0514 00:22:17.086845 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:19.881664 systemd[1]: Started sshd@32-37.27.39.104:22-139.178.89.65:42488.service - OpenSSH per-connection server daemon (139.178.89.65:42488).
May 14 00:22:20.891293 sshd[5212]: Accepted publickey for core from 139.178.89.65 port 42488 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:22:20.892765 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:22:20.905856 systemd-logind[1496]: New session 32 of user core.
May 14 00:22:20.910660 systemd[1]: Started session-32.scope - Session 32 of User core.
May 14 00:22:21.681854 sshd[5214]: Connection closed by 139.178.89.65 port 42488
May 14 00:22:21.684685 sshd-session[5212]: pam_unix(sshd:session): session closed for user core
May 14 00:22:21.690669 systemd[1]: sshd@32-37.27.39.104:22-139.178.89.65:42488.service: Deactivated successfully.
May 14 00:22:21.696950 systemd[1]: session-32.scope: Deactivated successfully.
May 14 00:22:21.701030 systemd-logind[1496]: Session 32 logged out. Waiting for processes to exit.
May 14 00:22:21.703073 systemd-logind[1496]: Removed session 32.
May 14 00:22:22.087972 kubelet[2809]: E0514 00:22:22.087896 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:26.864283 systemd[1]: Started sshd@33-37.27.39.104:22-139.178.89.65:39034.service - OpenSSH per-connection server daemon (139.178.89.65:39034).
May 14 00:22:27.088346 kubelet[2809]: E0514 00:22:27.088286 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:27.883296 sshd[5234]: Accepted publickey for core from 139.178.89.65 port 39034 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:22:27.886497 sshd-session[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:22:27.896613 systemd-logind[1496]: New session 33 of user core.
May 14 00:22:27.905466 systemd[1]: Started session-33.scope - Session 33 of User core.
May 14 00:22:28.687639 sshd[5236]: Connection closed by 139.178.89.65 port 39034
May 14 00:22:28.688673 sshd-session[5234]: pam_unix(sshd:session): session closed for user core
May 14 00:22:28.693382 systemd-logind[1496]: Session 33 logged out. Waiting for processes to exit.
May 14 00:22:28.694105 systemd[1]: sshd@33-37.27.39.104:22-139.178.89.65:39034.service: Deactivated successfully.
May 14 00:22:28.696064 systemd[1]: session-33.scope: Deactivated successfully.
May 14 00:22:28.697671 systemd-logind[1496]: Removed session 33.
May 14 00:22:32.089691 kubelet[2809]: E0514 00:22:32.089616 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:33.865378 systemd[1]: Started sshd@34-37.27.39.104:22-139.178.89.65:39040.service - OpenSSH per-connection server daemon (139.178.89.65:39040).
May 14 00:22:34.887553 sshd[5249]: Accepted publickey for core from 139.178.89.65 port 39040 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:22:34.891611 sshd-session[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:22:34.903456 systemd-logind[1496]: New session 34 of user core.
May 14 00:22:34.911487 systemd[1]: Started session-34.scope - Session 34 of User core.
May 14 00:22:35.694868 sshd[5253]: Connection closed by 139.178.89.65 port 39040
May 14 00:22:35.695622 sshd-session[5249]: pam_unix(sshd:session): session closed for user core
May 14 00:22:35.700469 systemd[1]: sshd@34-37.27.39.104:22-139.178.89.65:39040.service: Deactivated successfully.
May 14 00:22:35.704591 systemd[1]: session-34.scope: Deactivated successfully.
May 14 00:22:35.707526 systemd-logind[1496]: Session 34 logged out. Waiting for processes to exit.
May 14 00:22:35.710405 systemd-logind[1496]: Removed session 34.
May 14 00:22:37.090486 kubelet[2809]: E0514 00:22:37.090394 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:40.866065 systemd[1]: Started sshd@35-37.27.39.104:22-139.178.89.65:59440.service - OpenSSH per-connection server daemon (139.178.89.65:59440).
May 14 00:22:41.890292 sshd[5266]: Accepted publickey for core from 139.178.89.65 port 59440 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:22:41.891897 sshd-session[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:22:41.902831 systemd-logind[1496]: New session 35 of user core.
May 14 00:22:41.909170 systemd[1]: Started session-35.scope - Session 35 of User core.
May 14 00:22:42.090882 kubelet[2809]: E0514 00:22:42.090799 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:42.129770 containerd[1528]: time="2025-05-14T00:22:42.129624406Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"846b18217a201f2517698a9ab53e3a89259b69478585e61195e7a41f5517d538\" pid:5284 exited_at:{seconds:1747182162 nanos:128970030}"
May 14 00:22:42.700106 sshd[5270]: Connection closed by 139.178.89.65 port 59440
May 14 00:22:42.702169 sshd-session[5266]: pam_unix(sshd:session): session closed for user core
May 14 00:22:42.708650 systemd-logind[1496]: Session 35 logged out. Waiting for processes to exit.
May 14 00:22:42.709161 systemd[1]: sshd@35-37.27.39.104:22-139.178.89.65:59440.service: Deactivated successfully.
May 14 00:22:42.712585 systemd[1]: session-35.scope: Deactivated successfully.
May 14 00:22:42.715133 systemd-logind[1496]: Removed session 35.
May 14 00:22:47.091382 kubelet[2809]: E0514 00:22:47.091304 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:47.879099 systemd[1]: Started sshd@36-37.27.39.104:22-139.178.89.65:55944.service - OpenSSH per-connection server daemon (139.178.89.65:55944).
May 14 00:22:48.892286 sshd[5308]: Accepted publickey for core from 139.178.89.65 port 55944 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:22:48.894075 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:22:48.900790 systemd-logind[1496]: New session 36 of user core.
May 14 00:22:48.910496 systemd[1]: Started session-36.scope - Session 36 of User core.
May 14 00:22:49.679137 sshd[5310]: Connection closed by 139.178.89.65 port 55944
May 14 00:22:49.680307 sshd-session[5308]: pam_unix(sshd:session): session closed for user core
May 14 00:22:49.684718 systemd[1]: sshd@36-37.27.39.104:22-139.178.89.65:55944.service: Deactivated successfully.
May 14 00:22:49.687800 systemd[1]: session-36.scope: Deactivated successfully.
May 14 00:22:49.690411 systemd-logind[1496]: Session 36 logged out. Waiting for processes to exit.
May 14 00:22:49.692782 systemd-logind[1496]: Removed session 36.
May 14 00:22:52.091969 kubelet[2809]: E0514 00:22:52.091868 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:22:54.853004 systemd[1]: Started sshd@37-37.27.39.104:22-139.178.89.65:55956.service - OpenSSH per-connection server daemon (139.178.89.65:55956).
May 14 00:22:55.879288 sshd[5323]: Accepted publickey for core from 139.178.89.65 port 55956 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:22:55.882008 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:22:55.901798 systemd-logind[1496]: New session 37 of user core.
May 14 00:22:55.906192 systemd[1]: Started session-37.scope - Session 37 of User core.
May 14 00:22:56.667625 sshd[5325]: Connection closed by 139.178.89.65 port 55956
May 14 00:22:56.668582 sshd-session[5323]: pam_unix(sshd:session): session closed for user core
May 14 00:22:56.674969 systemd[1]: sshd@37-37.27.39.104:22-139.178.89.65:55956.service: Deactivated successfully.
May 14 00:22:56.678530 systemd[1]: session-37.scope: Deactivated successfully.
May 14 00:22:56.681117 systemd-logind[1496]: Session 37 logged out. Waiting for processes to exit.
May 14 00:22:56.683643 systemd-logind[1496]: Removed session 37.
May 14 00:22:57.092425 kubelet[2809]: E0514 00:22:57.092338 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:01.844400 systemd[1]: Started sshd@38-37.27.39.104:22-139.178.89.65:34522.service - OpenSSH per-connection server daemon (139.178.89.65:34522).
May 14 00:23:01.847654 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
May 14 00:23:01.944044 systemd-tmpfiles[5339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 00:23:01.947811 systemd-tmpfiles[5339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 00:23:01.952801 systemd-tmpfiles[5339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 00:23:01.953808 systemd-tmpfiles[5339]: ACLs are not supported, ignoring.
May 14 00:23:01.954004 systemd-tmpfiles[5339]: ACLs are not supported, ignoring.
May 14 00:23:01.960449 systemd-tmpfiles[5339]: Detected autofs mount point /boot during canonicalization of boot.
May 14 00:23:01.960637 systemd-tmpfiles[5339]: Skipping /boot
May 14 00:23:01.969890 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
May 14 00:23:01.970124 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
May 14 00:23:01.975125 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
May 14 00:23:02.092731 kubelet[2809]: E0514 00:23:02.092662 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:02.920214 sshd[5338]: Accepted publickey for core from 139.178.89.65 port 34522 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:23:02.923389 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:23:02.931757 systemd-logind[1496]: New session 38 of user core.
May 14 00:23:02.934711 systemd[1]: Started session-38.scope - Session 38 of User core.
May 14 00:23:03.764588 sshd[5343]: Connection closed by 139.178.89.65 port 34522
May 14 00:23:03.765661 sshd-session[5338]: pam_unix(sshd:session): session closed for user core
May 14 00:23:03.773638 systemd[1]: sshd@38-37.27.39.104:22-139.178.89.65:34522.service: Deactivated successfully.
May 14 00:23:03.777061 systemd[1]: session-38.scope: Deactivated successfully.
May 14 00:23:03.780413 systemd-logind[1496]: Session 38 logged out. Waiting for processes to exit.
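
The systemd-tmpfiles warnings in the 00:23:01 run are benign: more than one tmpfiles.d drop-in declares the same path, and systemd-tmpfiles keeps the first definition it reads and ignores the rest. A rough sketch for locating colliding drop-ins; it searches only the standard directories and naively treats the second whitespace-separated field of each non-comment line as the path:

    # Report tmpfiles.d paths declared by more than one drop-in file.
    from collections import defaultdict
    from pathlib import Path

    seen = defaultdict(list)
    for d in ("/usr/lib/tmpfiles.d", "/etc/tmpfiles.d", "/run/tmpfiles.d"):
        base = Path(d)
        if not base.is_dir():
            continue
        for conf in sorted(base.glob("*.conf")):
            for raw in conf.read_text(errors="replace").splitlines():
                fields = raw.split()
                if len(fields) >= 2 and not fields[0].startswith("#"):
                    seen[fields[1]].append(conf.name)

    for path, sources in sorted(seen.items()):
        if len(sources) > 1:
            print(path, "<-", ", ".join(sources))
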
May 14 00:23:03.782314 systemd-logind[1496]: Removed session 38.
May 14 00:23:07.093103 kubelet[2809]: E0514 00:23:07.093018 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:08.944155 systemd[1]: Started sshd@39-37.27.39.104:22-139.178.89.65:55886.service - OpenSSH per-connection server daemon (139.178.89.65:55886).
May 14 00:23:09.964611 sshd[5359]: Accepted publickey for core from 139.178.89.65 port 55886 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:23:09.966902 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:23:09.977084 systemd-logind[1496]: New session 39 of user core.
May 14 00:23:09.982596 systemd[1]: Started session-39.scope - Session 39 of User core.
May 14 00:23:10.752927 sshd[5361]: Connection closed by 139.178.89.65 port 55886
May 14 00:23:10.753948 sshd-session[5359]: pam_unix(sshd:session): session closed for user core
May 14 00:23:10.759594 systemd-logind[1496]: Session 39 logged out. Waiting for processes to exit.
May 14 00:23:10.760119 systemd[1]: sshd@39-37.27.39.104:22-139.178.89.65:55886.service: Deactivated successfully.
May 14 00:23:10.763428 systemd[1]: session-39.scope: Deactivated successfully.
May 14 00:23:10.765748 systemd-logind[1496]: Removed session 39.
May 14 00:23:12.093566 kubelet[2809]: E0514 00:23:12.093489 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:12.125013 containerd[1528]: time="2025-05-14T00:23:12.124949768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"0d0cb5c450d9877ee22135baad43705a610dad52c0faee81da291b8f4136ddcf\" pid:5388 exited_at:{seconds:1747182192 nanos:124398265}"
May 14 00:23:15.926319 systemd[1]: Started sshd@40-37.27.39.104:22-139.178.89.65:55896.service - OpenSSH per-connection server daemon (139.178.89.65:55896).
May 14 00:23:16.930293 sshd[5402]: Accepted publickey for core from 139.178.89.65 port 55896 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:23:16.932643 sshd-session[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:23:16.941496 systemd-logind[1496]: New session 40 of user core.
May 14 00:23:16.949522 systemd[1]: Started session-40.scope - Session 40 of User core.
May 14 00:23:17.094581 kubelet[2809]: E0514 00:23:17.094384 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:17.717002 sshd[5405]: Connection closed by 139.178.89.65 port 55896
May 14 00:23:17.718611 sshd-session[5402]: pam_unix(sshd:session): session closed for user core
May 14 00:23:17.724986 systemd-logind[1496]: Session 40 logged out. Waiting for processes to exit.
May 14 00:23:17.726107 systemd[1]: sshd@40-37.27.39.104:22-139.178.89.65:55896.service: Deactivated successfully.
May 14 00:23:17.729859 systemd[1]: session-40.scope: Deactivated successfully.
May 14 00:23:17.731860 systemd-logind[1496]: Removed session 40.
May 14 00:23:22.094598 kubelet[2809]: E0514 00:23:22.094544 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:22.895794 systemd[1]: Started sshd@41-37.27.39.104:22-139.178.89.65:55314.service - OpenSSH per-connection server daemon (139.178.89.65:55314).
May 14 00:23:23.904258 sshd[5419]: Accepted publickey for core from 139.178.89.65 port 55314 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:23:23.906868 sshd-session[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:23:23.915561 systemd-logind[1496]: New session 41 of user core.
May 14 00:23:23.923539 systemd[1]: Started session-41.scope - Session 41 of User core.
May 14 00:23:24.680312 sshd[5421]: Connection closed by 139.178.89.65 port 55314
May 14 00:23:24.681215 sshd-session[5419]: pam_unix(sshd:session): session closed for user core
May 14 00:23:24.686271 systemd[1]: sshd@41-37.27.39.104:22-139.178.89.65:55314.service: Deactivated successfully.
May 14 00:23:24.689898 systemd[1]: session-41.scope: Deactivated successfully.
May 14 00:23:24.691562 systemd-logind[1496]: Session 41 logged out. Waiting for processes to exit.
May 14 00:23:24.693144 systemd-logind[1496]: Removed session 41.
May 14 00:23:27.095561 kubelet[2809]: E0514 00:23:27.095480 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:29.856576 systemd[1]: Started sshd@42-37.27.39.104:22-139.178.89.65:59980.service - OpenSSH per-connection server daemon (139.178.89.65:59980).
May 14 00:23:30.864453 sshd[5434]: Accepted publickey for core from 139.178.89.65 port 59980 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:23:30.867505 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:23:30.877769 systemd-logind[1496]: New session 42 of user core.
May 14 00:23:30.881777 systemd[1]: Started session-42.scope - Session 42 of User core.
May 14 00:23:31.667523 sshd[5436]: Connection closed by 139.178.89.65 port 59980
May 14 00:23:31.668596 sshd-session[5434]: pam_unix(sshd:session): session closed for user core
May 14 00:23:31.674493 systemd[1]: sshd@42-37.27.39.104:22-139.178.89.65:59980.service: Deactivated successfully.
May 14 00:23:31.679007 systemd[1]: session-42.scope: Deactivated successfully.
May 14 00:23:31.681051 systemd-logind[1496]: Session 42 logged out. Waiting for processes to exit.
May 14 00:23:31.683180 systemd-logind[1496]: Removed session 42.
May 14 00:23:32.096340 kubelet[2809]: E0514 00:23:32.096255 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:36.840265 systemd[1]: Started sshd@43-37.27.39.104:22-139.178.89.65:41428.service - OpenSSH per-connection server daemon (139.178.89.65:41428).
May 14 00:23:37.098486 kubelet[2809]: E0514 00:23:37.098255 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:37.851603 sshd[5453]: Accepted publickey for core from 139.178.89.65 port 41428 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:23:37.853474 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:23:37.859287 systemd-logind[1496]: New session 43 of user core.
May 14 00:23:37.864473 systemd[1]: Started session-43.scope - Session 43 of User core.
May 14 00:23:38.637835 sshd[5455]: Connection closed by 139.178.89.65 port 41428
May 14 00:23:38.638702 sshd-session[5453]: pam_unix(sshd:session): session closed for user core
May 14 00:23:38.642865 systemd[1]: sshd@43-37.27.39.104:22-139.178.89.65:41428.service: Deactivated successfully.
May 14 00:23:38.645759 systemd[1]: session-43.scope: Deactivated successfully.
May 14 00:23:38.648380 systemd-logind[1496]: Session 43 logged out. Waiting for processes to exit.
May 14 00:23:38.650591 systemd-logind[1496]: Removed session 43.
May 14 00:23:38.809524 systemd[1]: Started sshd@44-37.27.39.104:22-139.178.89.65:41436.service - OpenSSH per-connection server daemon (139.178.89.65:41436).
May 14 00:23:39.812130 sshd[5468]: Accepted publickey for core from 139.178.89.65 port 41436 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:23:39.814688 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:23:39.821703 kubelet[2809]: E0514 00:23:39.820659 2809 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:23:39.821703 kubelet[2809]: E0514 00:23:39.820748 2809 kubelet.go:2993] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:23:39.826498 systemd-logind[1496]: New session 44 of user core.
May 14 00:23:39.834539 systemd[1]: Started session-44.scope - Session 44 of User core.
May 14 00:23:40.667984 sshd[5470]: Connection closed by 139.178.89.65 port 41436
May 14 00:23:40.668730 sshd-session[5468]: pam_unix(sshd:session): session closed for user core
May 14 00:23:40.672788 systemd-logind[1496]: Session 44 logged out. Waiting for processes to exit.
May 14 00:23:40.673623 systemd[1]: sshd@44-37.27.39.104:22-139.178.89.65:41436.service: Deactivated successfully.
May 14 00:23:40.677009 systemd[1]: session-44.scope: Deactivated successfully.
May 14 00:23:40.680000 systemd-logind[1496]: Removed session 44.
May 14 00:23:40.842996 systemd[1]: Started sshd@45-37.27.39.104:22-139.178.89.65:41440.service - OpenSSH per-connection server daemon (139.178.89.65:41440).
May 14 00:23:41.862022 sshd[5479]: Accepted publickey for core from 139.178.89.65 port 41440 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:23:41.864130 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:23:41.874664 systemd-logind[1496]: New session 45 of user core.
May 14 00:23:41.881497 systemd[1]: Started session-45.scope - Session 45 of User core.
May 14 00:23:42.099455 kubelet[2809]: E0514 00:23:42.099384 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:42.107480 containerd[1528]: time="2025-05-14T00:23:42.107430275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"fc75f62e3869ab247054233b1d5c475c5f45b334ec5ed8e06d01fd3969fa7944\" pid:5496 exited_at:{seconds:1747182222 nanos:106876116}"
May 14 00:23:42.646682 sshd[5483]: Connection closed by 139.178.89.65 port 41440
May 14 00:23:42.647798 sshd-session[5479]: pam_unix(sshd:session): session closed for user core
May 14 00:23:42.652831 systemd[1]: sshd@45-37.27.39.104:22-139.178.89.65:41440.service: Deactivated successfully.
May 14 00:23:42.657047 systemd[1]: session-45.scope: Deactivated successfully.
May 14 00:23:42.660583 systemd-logind[1496]: Session 45 logged out. Waiting for processes to exit.
May 14 00:23:42.662991 systemd-logind[1496]: Removed session 45.
May 14 00:23:47.100109 kubelet[2809]: E0514 00:23:47.100012 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:47.821427 systemd[1]: Started sshd@46-37.27.39.104:22-139.178.89.65:46278.service - OpenSSH per-connection server daemon (139.178.89.65:46278).
May 14 00:23:48.846649 sshd[5525]: Accepted publickey for core from 139.178.89.65 port 46278 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:23:48.849530 sshd-session[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:23:48.859987 systemd-logind[1496]: New session 46 of user core.
May 14 00:23:48.867548 systemd[1]: Started session-46.scope - Session 46 of User core.
May 14 00:23:49.670563 sshd[5527]: Connection closed by 139.178.89.65 port 46278
May 14 00:23:49.671717 sshd-session[5525]: pam_unix(sshd:session): session closed for user core
May 14 00:23:49.677361 systemd-logind[1496]: Session 46 logged out. Waiting for processes to exit.
May 14 00:23:49.678541 systemd[1]: sshd@46-37.27.39.104:22-139.178.89.65:46278.service: Deactivated successfully.
May 14 00:23:49.682707 systemd[1]: session-46.scope: Deactivated successfully.
May 14 00:23:49.684827 systemd-logind[1496]: Removed session 46.
May 14 00:23:52.101168 kubelet[2809]: E0514 00:23:52.101120 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:23:54.844751 systemd[1]: Started sshd@47-37.27.39.104:22-139.178.89.65:46286.service - OpenSSH per-connection server daemon (139.178.89.65:46286).
May 14 00:23:55.852069 sshd[5546]: Accepted publickey for core from 139.178.89.65 port 46286 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:23:55.854652 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:23:55.863085 systemd-logind[1496]: New session 47 of user core.
May 14 00:23:55.869889 systemd[1]: Started session-47.scope - Session 47 of User core.
May 14 00:23:56.687849 sshd[5548]: Connection closed by 139.178.89.65 port 46286
May 14 00:23:56.690875 sshd-session[5546]: pam_unix(sshd:session): session closed for user core
May 14 00:23:56.698054 systemd[1]: sshd@47-37.27.39.104:22-139.178.89.65:46286.service: Deactivated successfully.
May 14 00:23:56.701401 systemd[1]: session-47.scope: Deactivated successfully.
May 14 00:23:56.702977 systemd-logind[1496]: Session 47 logged out. Waiting for processes to exit.
May 14 00:23:56.705459 systemd-logind[1496]: Removed session 47.
May 14 00:23:57.101404 kubelet[2809]: E0514 00:23:57.101313 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:24:01.861327 systemd[1]: Started sshd@48-37.27.39.104:22-139.178.89.65:56180.service - OpenSSH per-connection server daemon (139.178.89.65:56180).
May 14 00:24:02.101682 kubelet[2809]: E0514 00:24:02.101632 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:24:02.894276 sshd[5560]: Accepted publickey for core from 139.178.89.65 port 56180 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:24:02.896402 sshd-session[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:24:02.908012 systemd-logind[1496]: New session 48 of user core.
May 14 00:24:02.917075 systemd[1]: Started session-48.scope - Session 48 of User core.
May 14 00:24:03.704727 sshd[5562]: Connection closed by 139.178.89.65 port 56180
May 14 00:24:03.705926 sshd-session[5560]: pam_unix(sshd:session): session closed for user core
May 14 00:24:03.710447 systemd[1]: sshd@48-37.27.39.104:22-139.178.89.65:56180.service: Deactivated successfully.
May 14 00:24:03.713498 systemd[1]: session-48.scope: Deactivated successfully.
May 14 00:24:03.715664 systemd-logind[1496]: Session 48 logged out. Waiting for processes to exit.
May 14 00:24:03.717532 systemd-logind[1496]: Removed session 48.
May 14 00:24:07.102762 kubelet[2809]: E0514 00:24:07.102662 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:24:08.879447 systemd[1]: Started sshd@49-37.27.39.104:22-139.178.89.65:44222.service - OpenSSH per-connection server daemon (139.178.89.65:44222).
May 14 00:24:09.899014 sshd[5574]: Accepted publickey for core from 139.178.89.65 port 44222 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:24:09.904866 sshd-session[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:24:09.914266 systemd-logind[1496]: New session 49 of user core.
May 14 00:24:09.923481 systemd[1]: Started session-49.scope - Session 49 of User core.
May 14 00:24:10.706262 sshd[5576]: Connection closed by 139.178.89.65 port 44222
May 14 00:24:10.708532 sshd-session[5574]: pam_unix(sshd:session): session closed for user core
May 14 00:24:10.720913 systemd-logind[1496]: Session 49 logged out. Waiting for processes to exit.
May 14 00:24:10.721482 systemd[1]: sshd@49-37.27.39.104:22-139.178.89.65:44222.service: Deactivated successfully.
May 14 00:24:10.725556 systemd[1]: session-49.scope: Deactivated successfully.
May 14 00:24:10.727795 systemd-logind[1496]: Removed session 49.
May 14 00:24:11.746363 systemd[1]: Started sshd@50-37.27.39.104:22-194.0.234.19:56904.service - OpenSSH per-connection server daemon (194.0.234.19:56904).
May 14 00:24:12.103933 kubelet[2809]: E0514 00:24:12.103744 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:24:12.146845 containerd[1528]: time="2025-05-14T00:24:12.146781169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"4320b02ed2d5b8fec6c88c514b4f7fe1c93745452c35eb341c4f4c3235fbe009\" pid:5604 exited_at:{seconds:1747182252 nanos:146150738}"
May 14 00:24:13.237442 sshd[5591]: Invalid user guest from 194.0.234.19 port 56904
May 14 00:24:13.353702 sshd[5591]: Connection closed by invalid user guest 194.0.234.19 port 56904 [preauth]
May 14 00:24:13.356153 systemd[1]: sshd@50-37.27.39.104:22-194.0.234.19:56904.service: Deactivated successfully.
May 14 00:24:15.875655 systemd[1]: Started sshd@51-37.27.39.104:22-139.178.89.65:44234.service - OpenSSH per-connection server daemon (139.178.89.65:44234).
May 14 00:24:16.899584 sshd[5621]: Accepted publickey for core from 139.178.89.65 port 44234 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:24:16.906694 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:24:16.915373 systemd-logind[1496]: New session 50 of user core.
May 14 00:24:16.920505 systemd[1]: Started session-50.scope - Session 50 of User core.
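
The 00:24:13 entries record an unauthenticated scan: a connection from 194.0.234.19 tried the nonexistent user guest and was dropped preauth, in contrast to the keyed logins from 139.178.89.65. A quick sketch for tallying such probes per source address across the saved journal, to judge whether this was a one-off or a sustained scan:

    # Count sshd "Invalid user" attempts per source address.
    import re
    from collections import Counter

    INVALID = re.compile(r'sshd\[\d+\]: Invalid user \S+ from (\d+\.\d+\.\d+\.\d+)')

    hits = Counter()
    with open("node.log", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = INVALID.search(line)
            if m:
                hits[m.group(1)] += 1

    for addr, count in hits.most_common():
        print(addr, count)
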
May 14 00:24:17.104922 kubelet[2809]: E0514 00:24:17.104843 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:24:17.743514 sshd[5623]: Connection closed by 139.178.89.65 port 44234
May 14 00:24:17.744469 sshd-session[5621]: pam_unix(sshd:session): session closed for user core
May 14 00:24:17.748989 systemd[1]: sshd@51-37.27.39.104:22-139.178.89.65:44234.service: Deactivated successfully.
May 14 00:24:17.753878 systemd[1]: session-50.scope: Deactivated successfully.
May 14 00:24:17.755857 systemd-logind[1496]: Session 50 logged out. Waiting for processes to exit.
May 14 00:24:17.759469 systemd-logind[1496]: Removed session 50.
May 14 00:24:22.106403 kubelet[2809]: E0514 00:24:22.106303 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:24:22.915882 systemd[1]: Started sshd@52-37.27.39.104:22-139.178.89.65:36988.service - OpenSSH per-connection server daemon (139.178.89.65:36988).
May 14 00:24:23.924069 sshd[5636]: Accepted publickey for core from 139.178.89.65 port 36988 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:24:23.926679 sshd-session[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:24:23.940670 systemd-logind[1496]: New session 51 of user core.
May 14 00:24:23.949484 systemd[1]: Started session-51.scope - Session 51 of User core.
May 14 00:24:24.716883 sshd[5638]: Connection closed by 139.178.89.65 port 36988
May 14 00:24:24.720314 sshd-session[5636]: pam_unix(sshd:session): session closed for user core
May 14 00:24:24.725056 systemd-logind[1496]: Session 51 logged out. Waiting for processes to exit.
May 14 00:24:24.726487 systemd[1]: sshd@52-37.27.39.104:22-139.178.89.65:36988.service: Deactivated successfully.
May 14 00:24:24.730866 systemd[1]: session-51.scope: Deactivated successfully.
May 14 00:24:24.736057 systemd-logind[1496]: Removed session 51.
May 14 00:24:27.107632 kubelet[2809]: E0514 00:24:27.107549 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:24:29.898496 systemd[1]: Started sshd@53-37.27.39.104:22-139.178.89.65:33326.service - OpenSSH per-connection server daemon (139.178.89.65:33326).
May 14 00:24:30.909273 sshd[5650]: Accepted publickey for core from 139.178.89.65 port 33326 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:24:30.911581 sshd-session[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:24:30.919454 systemd-logind[1496]: New session 52 of user core.
May 14 00:24:30.926523 systemd[1]: Started session-52.scope - Session 52 of User core.
May 14 00:24:31.685513 sshd[5652]: Connection closed by 139.178.89.65 port 33326
May 14 00:24:31.686646 sshd-session[5650]: pam_unix(sshd:session): session closed for user core
May 14 00:24:31.692716 systemd-logind[1496]: Session 52 logged out. Waiting for processes to exit.
May 14 00:24:31.695077 systemd[1]: sshd@53-37.27.39.104:22-139.178.89.65:33326.service: Deactivated successfully.
May 14 00:24:31.698738 systemd[1]: session-52.scope: Deactivated successfully.
May 14 00:24:31.701114 systemd-logind[1496]: Removed session 52.
May 14 00:24:32.108296 kubelet[2809]: E0514 00:24:32.108192 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:24:36.862529 systemd[1]: Started sshd@54-37.27.39.104:22-139.178.89.65:55410.service - OpenSSH per-connection server daemon (139.178.89.65:55410).
May 14 00:24:37.108410 kubelet[2809]: E0514 00:24:37.108353 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:24:37.868723 sshd[5666]: Accepted publickey for core from 139.178.89.65 port 55410 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:24:37.871133 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:37.881094 systemd-logind[1496]: New session 53 of user core. May 14 00:24:37.889799 systemd[1]: Started session-53.scope - Session 53 of User core. May 14 00:24:38.656662 sshd[5668]: Connection closed by 139.178.89.65 port 55410 May 14 00:24:38.657824 sshd-session[5666]: pam_unix(sshd:session): session closed for user core May 14 00:24:38.664075 systemd-logind[1496]: Session 53 logged out. Waiting for processes to exit. May 14 00:24:38.665148 systemd[1]: sshd@54-37.27.39.104:22-139.178.89.65:55410.service: Deactivated successfully. May 14 00:24:38.668929 systemd[1]: session-53.scope: Deactivated successfully. May 14 00:24:38.671712 systemd-logind[1496]: Removed session 53. May 14 00:24:42.109374 kubelet[2809]: E0514 00:24:42.109269 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:24:42.130699 containerd[1528]: time="2025-05-14T00:24:42.130637655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"ee98aef50a6d0b075f5a9cb7d5ee5d7dc92f1aaaecc673f1792576fa2b05f9ca\" pid:5694 exited_at:{seconds:1747182282 nanos:129915132}" May 14 00:24:43.827589 systemd[1]: Started sshd@55-37.27.39.104:22-139.178.89.65:55420.service - OpenSSH per-connection server daemon (139.178.89.65:55420). May 14 00:24:44.818932 sshd[5708]: Accepted publickey for core from 139.178.89.65 port 55420 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:24:44.821471 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:44.830098 systemd-logind[1496]: New session 54 of user core. May 14 00:24:44.835497 systemd[1]: Started session-54.scope - Session 54 of User core. May 14 00:24:45.606036 sshd[5710]: Connection closed by 139.178.89.65 port 55420 May 14 00:24:45.607470 sshd-session[5708]: pam_unix(sshd:session): session closed for user core May 14 00:24:45.612325 systemd[1]: sshd@55-37.27.39.104:22-139.178.89.65:55420.service: Deactivated successfully. May 14 00:24:45.616753 systemd[1]: session-54.scope: Deactivated successfully. May 14 00:24:45.619716 systemd-logind[1496]: Session 54 logged out. Waiting for processes to exit. May 14 00:24:45.622023 systemd-logind[1496]: Removed session 54. May 14 00:24:47.110516 kubelet[2809]: E0514 00:24:47.110442 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:24:50.784515 systemd[1]: Started sshd@56-37.27.39.104:22-139.178.89.65:59088.service - OpenSSH per-connection server daemon (139.178.89.65:59088). May 14 00:24:51.796368 sshd[5722]: Accepted publickey for core from 139.178.89.65 port 59088 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:24:51.798469 sshd-session[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:51.803747 systemd-logind[1496]: New session 55 of user core. May 14 00:24:51.809671 systemd[1]: Started session-55.scope - Session 55 of User core. 
May 14 00:24:52.111876 kubelet[2809]: E0514 00:24:52.111601 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:24:52.618930 sshd[5732]: Connection closed by 139.178.89.65 port 59088 May 14 00:24:52.621383 sshd-session[5722]: pam_unix(sshd:session): session closed for user core May 14 00:24:52.626623 systemd[1]: sshd@56-37.27.39.104:22-139.178.89.65:59088.service: Deactivated successfully. May 14 00:24:52.632329 systemd[1]: session-55.scope: Deactivated successfully. May 14 00:24:52.635128 systemd-logind[1496]: Session 55 logged out. Waiting for processes to exit. May 14 00:24:52.636974 systemd-logind[1496]: Removed session 55. May 14 00:24:57.112774 kubelet[2809]: E0514 00:24:57.112671 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:24:57.799546 systemd[1]: Started sshd@57-37.27.39.104:22-139.178.89.65:53180.service - OpenSSH per-connection server daemon (139.178.89.65:53180). May 14 00:24:58.804115 sshd[5743]: Accepted publickey for core from 139.178.89.65 port 53180 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:24:58.805205 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:24:58.815575 systemd-logind[1496]: New session 56 of user core. May 14 00:24:58.820593 systemd[1]: Started session-56.scope - Session 56 of User core. May 14 00:24:59.606158 sshd[5745]: Connection closed by 139.178.89.65 port 53180 May 14 00:24:59.607522 sshd-session[5743]: pam_unix(sshd:session): session closed for user core May 14 00:24:59.612879 systemd-logind[1496]: Session 56 logged out. Waiting for processes to exit. May 14 00:24:59.613885 systemd[1]: sshd@57-37.27.39.104:22-139.178.89.65:53180.service: Deactivated successfully. May 14 00:24:59.617045 systemd[1]: session-56.scope: Deactivated successfully. May 14 00:24:59.619637 systemd-logind[1496]: Removed session 56. May 14 00:25:02.113348 kubelet[2809]: E0514 00:25:02.113285 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:04.776656 systemd[1]: Started sshd@58-37.27.39.104:22-139.178.89.65:53182.service - OpenSSH per-connection server daemon (139.178.89.65:53182). May 14 00:25:05.789290 sshd[5757]: Accepted publickey for core from 139.178.89.65 port 53182 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:25:05.790485 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:25:05.800374 systemd-logind[1496]: New session 57 of user core. May 14 00:25:05.805499 systemd[1]: Started session-57.scope - Session 57 of User core. May 14 00:25:06.583332 sshd[5759]: Connection closed by 139.178.89.65 port 53182 May 14 00:25:06.585508 sshd-session[5757]: pam_unix(sshd:session): session closed for user core May 14 00:25:06.590889 systemd[1]: sshd@58-37.27.39.104:22-139.178.89.65:53182.service: Deactivated successfully. May 14 00:25:06.594780 systemd[1]: session-57.scope: Deactivated successfully. May 14 00:25:06.596332 systemd-logind[1496]: Session 57 logged out. Waiting for processes to exit. May 14 00:25:06.598157 systemd-logind[1496]: Removed session 57. May 14 00:25:07.114509 kubelet[2809]: E0514 00:25:07.114438 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:11.762005 systemd[1]: Started sshd@59-37.27.39.104:22-139.178.89.65:60058.service - OpenSSH per-connection server daemon (139.178.89.65:60058). 
May 14 00:25:12.115436 kubelet[2809]: E0514 00:25:12.115288 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:12.125451 containerd[1528]: time="2025-05-14T00:25:12.125374070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"5cf76ee7c6992780e351441894fc35d17264c989de95c6b85adf54d0673f4aa9\" pid:5786 exited_at:{seconds:1747182312 nanos:124872660}" May 14 00:25:12.766036 sshd[5772]: Accepted publickey for core from 139.178.89.65 port 60058 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:25:12.770093 sshd-session[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:25:12.780136 systemd-logind[1496]: New session 58 of user core. May 14 00:25:12.794574 systemd[1]: Started session-58.scope - Session 58 of User core. May 14 00:25:13.572569 sshd[5799]: Connection closed by 139.178.89.65 port 60058 May 14 00:25:13.574685 sshd-session[5772]: pam_unix(sshd:session): session closed for user core May 14 00:25:13.583466 systemd-logind[1496]: Session 58 logged out. Waiting for processes to exit. May 14 00:25:13.584563 systemd[1]: sshd@59-37.27.39.104:22-139.178.89.65:60058.service: Deactivated successfully. May 14 00:25:13.587924 systemd[1]: session-58.scope: Deactivated successfully. May 14 00:25:13.590383 systemd-logind[1496]: Removed session 58. May 14 00:25:17.116177 kubelet[2809]: E0514 00:25:17.116088 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:18.746558 systemd[1]: Started sshd@60-37.27.39.104:22-139.178.89.65:41188.service - OpenSSH per-connection server daemon (139.178.89.65:41188). May 14 00:25:19.759121 sshd[5811]: Accepted publickey for core from 139.178.89.65 port 41188 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:25:19.761575 sshd-session[5811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:25:19.770486 systemd-logind[1496]: New session 59 of user core. May 14 00:25:19.780493 systemd[1]: Started session-59.scope - Session 59 of User core. May 14 00:25:20.555557 sshd[5813]: Connection closed by 139.178.89.65 port 41188 May 14 00:25:20.556570 sshd-session[5811]: pam_unix(sshd:session): session closed for user core May 14 00:25:20.561738 systemd-logind[1496]: Session 59 logged out. Waiting for processes to exit. May 14 00:25:20.562578 systemd[1]: sshd@60-37.27.39.104:22-139.178.89.65:41188.service: Deactivated successfully. May 14 00:25:20.566000 systemd[1]: session-59.scope: Deactivated successfully. May 14 00:25:20.568918 systemd-logind[1496]: Removed session 59. May 14 00:25:22.117133 kubelet[2809]: E0514 00:25:22.117055 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:25.734894 systemd[1]: Started sshd@61-37.27.39.104:22-139.178.89.65:41204.service - OpenSSH per-connection server daemon (139.178.89.65:41204). May 14 00:25:26.745939 sshd[5830]: Accepted publickey for core from 139.178.89.65 port 41204 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:25:26.747042 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:25:26.755369 systemd-logind[1496]: New session 60 of user core. May 14 00:25:26.765609 systemd[1]: Started session-60.scope - Session 60 of User core. 
May 14 00:25:27.117451 kubelet[2809]: E0514 00:25:27.117178 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:27.550509 sshd[5832]: Connection closed by 139.178.89.65 port 41204 May 14 00:25:27.551601 sshd-session[5830]: pam_unix(sshd:session): session closed for user core May 14 00:25:27.557256 systemd-logind[1496]: Session 60 logged out. Waiting for processes to exit. May 14 00:25:27.558211 systemd[1]: sshd@61-37.27.39.104:22-139.178.89.65:41204.service: Deactivated successfully. May 14 00:25:27.563683 systemd[1]: session-60.scope: Deactivated successfully. May 14 00:25:27.565843 systemd-logind[1496]: Removed session 60. May 14 00:25:32.118165 kubelet[2809]: E0514 00:25:32.118094 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:32.722471 systemd[1]: Started sshd@62-37.27.39.104:22-139.178.89.65:47556.service - OpenSSH per-connection server daemon (139.178.89.65:47556). May 14 00:25:33.744179 sshd[5851]: Accepted publickey for core from 139.178.89.65 port 47556 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:25:33.746716 sshd-session[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:25:33.757259 systemd-logind[1496]: New session 61 of user core. May 14 00:25:33.764546 systemd[1]: Started session-61.scope - Session 61 of User core. May 14 00:25:34.508856 sshd[5853]: Connection closed by 139.178.89.65 port 47556 May 14 00:25:34.509548 sshd-session[5851]: pam_unix(sshd:session): session closed for user core May 14 00:25:34.515849 systemd-logind[1496]: Session 61 logged out. Waiting for processes to exit. May 14 00:25:34.517151 systemd[1]: sshd@62-37.27.39.104:22-139.178.89.65:47556.service: Deactivated successfully. May 14 00:25:34.520049 systemd[1]: session-61.scope: Deactivated successfully. May 14 00:25:34.521775 systemd-logind[1496]: Removed session 61. May 14 00:25:37.125185 kubelet[2809]: E0514 00:25:37.118632 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:39.689754 systemd[1]: Started sshd@63-37.27.39.104:22-139.178.89.65:56362.service - OpenSSH per-connection server daemon (139.178.89.65:56362). May 14 00:25:40.681142 sshd[5867]: Accepted publickey for core from 139.178.89.65 port 56362 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:25:40.683505 sshd-session[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:25:40.694566 systemd-logind[1496]: New session 62 of user core. May 14 00:25:40.706635 systemd[1]: Started session-62.scope - Session 62 of User core. May 14 00:25:41.483245 sshd[5869]: Connection closed by 139.178.89.65 port 56362 May 14 00:25:41.484187 sshd-session[5867]: pam_unix(sshd:session): session closed for user core May 14 00:25:41.488357 systemd[1]: sshd@63-37.27.39.104:22-139.178.89.65:56362.service: Deactivated successfully. May 14 00:25:41.491484 systemd[1]: session-62.scope: Deactivated successfully. May 14 00:25:41.494293 systemd-logind[1496]: Session 62 logged out. Waiting for processes to exit. May 14 00:25:41.496751 systemd-logind[1496]: Removed session 62. 
May 14 00:25:42.118909 kubelet[2809]: E0514 00:25:42.118794 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:42.138683 containerd[1528]: time="2025-05-14T00:25:42.138522363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"453217bada76c3fec435eb658742a95784b298b58962b96b8ff5d7aba43107e8\" pid:5894 exited_at:{seconds:1747182342 nanos:138164966}" May 14 00:25:44.822394 kubelet[2809]: E0514 00:25:44.822310 2809 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 14 00:25:44.823055 kubelet[2809]: E0514 00:25:44.822526 2809 kubelet.go:2993] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 14 00:25:46.659769 systemd[1]: Started sshd@64-37.27.39.104:22-139.178.89.65:41988.service - OpenSSH per-connection server daemon (139.178.89.65:41988). May 14 00:25:47.119753 kubelet[2809]: E0514 00:25:47.119608 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:47.670031 sshd[5908]: Accepted publickey for core from 139.178.89.65 port 41988 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:25:47.671093 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:25:47.678595 systemd-logind[1496]: New session 63 of user core. May 14 00:25:47.687482 systemd[1]: Started session-63.scope - Session 63 of User core. May 14 00:25:48.473739 sshd[5910]: Connection closed by 139.178.89.65 port 41988 May 14 00:25:48.475510 sshd-session[5908]: pam_unix(sshd:session): session closed for user core May 14 00:25:48.481611 systemd[1]: sshd@64-37.27.39.104:22-139.178.89.65:41988.service: Deactivated successfully. May 14 00:25:48.486295 systemd[1]: session-63.scope: Deactivated successfully. May 14 00:25:48.488972 systemd-logind[1496]: Session 63 logged out. Waiting for processes to exit. May 14 00:25:48.491093 systemd-logind[1496]: Removed session 63. May 14 00:25:52.120533 kubelet[2809]: E0514 00:25:52.120444 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:25:53.657703 systemd[1]: Started sshd@65-37.27.39.104:22-139.178.89.65:41992.service - OpenSSH per-connection server daemon (139.178.89.65:41992). May 14 00:25:54.671162 sshd[5921]: Accepted publickey for core from 139.178.89.65 port 41992 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:25:54.674819 sshd-session[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:25:54.691130 systemd-logind[1496]: New session 64 of user core. May 14 00:25:54.695559 systemd[1]: Started session-64.scope - Session 64 of User core. May 14 00:25:55.483622 sshd[5923]: Connection closed by 139.178.89.65 port 41992 May 14 00:25:55.484152 sshd-session[5921]: pam_unix(sshd:session): session closed for user core May 14 00:25:55.487723 systemd[1]: sshd@65-37.27.39.104:22-139.178.89.65:41992.service: Deactivated successfully. May 14 00:25:55.491625 systemd[1]: session-64.scope: Deactivated successfully. May 14 00:25:55.493595 systemd-logind[1496]: Session 64 logged out. Waiting for processes to exit. May 14 00:25:55.494631 systemd-logind[1496]: Removed session 64. 
May 14 00:25:57.121461 kubelet[2809]: E0514 00:25:57.121298 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:00.654917 systemd[1]: Started sshd@66-37.27.39.104:22-139.178.89.65:46302.service - OpenSSH per-connection server daemon (139.178.89.65:46302). May 14 00:26:01.664316 sshd[5935]: Accepted publickey for core from 139.178.89.65 port 46302 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:26:01.666742 sshd-session[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:26:01.674702 systemd-logind[1496]: New session 65 of user core. May 14 00:26:01.684522 systemd[1]: Started session-65.scope - Session 65 of User core. May 14 00:26:02.121725 kubelet[2809]: E0514 00:26:02.121627 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:02.451474 sshd[5937]: Connection closed by 139.178.89.65 port 46302 May 14 00:26:02.451261 sshd-session[5935]: pam_unix(sshd:session): session closed for user core May 14 00:26:02.458173 systemd-logind[1496]: Session 65 logged out. Waiting for processes to exit. May 14 00:26:02.459296 systemd[1]: sshd@66-37.27.39.104:22-139.178.89.65:46302.service: Deactivated successfully. May 14 00:26:02.463557 systemd[1]: session-65.scope: Deactivated successfully. May 14 00:26:02.465234 systemd-logind[1496]: Removed session 65. May 14 00:26:07.123020 kubelet[2809]: E0514 00:26:07.122880 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:07.626526 systemd[1]: Started sshd@67-37.27.39.104:22-139.178.89.65:42932.service - OpenSSH per-connection server daemon (139.178.89.65:42932). May 14 00:26:08.635499 sshd[5949]: Accepted publickey for core from 139.178.89.65 port 42932 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:26:08.636478 sshd-session[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:26:08.646312 systemd-logind[1496]: New session 66 of user core. May 14 00:26:08.649721 systemd[1]: Started session-66.scope - Session 66 of User core. May 14 00:26:09.431581 sshd[5951]: Connection closed by 139.178.89.65 port 42932 May 14 00:26:09.432494 sshd-session[5949]: pam_unix(sshd:session): session closed for user core May 14 00:26:09.440573 systemd[1]: sshd@67-37.27.39.104:22-139.178.89.65:42932.service: Deactivated successfully. May 14 00:26:09.444257 systemd[1]: session-66.scope: Deactivated successfully. May 14 00:26:09.445773 systemd-logind[1496]: Session 66 logged out. Waiting for processes to exit. May 14 00:26:09.447678 systemd-logind[1496]: Removed session 66. May 14 00:26:12.120134 containerd[1528]: time="2025-05-14T00:26:12.119994999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"cf5f51db5c819c566a811e41c9da617049ca9da75941a6c2cc558086b5035db4\" pid:5976 exited_at:{seconds:1747182372 nanos:119325835}" May 14 00:26:12.123404 kubelet[2809]: E0514 00:26:12.123350 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:14.607607 systemd[1]: Started sshd@68-37.27.39.104:22-139.178.89.65:42940.service - OpenSSH per-connection server daemon (139.178.89.65:42940). 
May 14 00:26:15.625054 sshd[5990]: Accepted publickey for core from 139.178.89.65 port 42940 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:26:15.628402 sshd-session[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:26:15.638157 systemd-logind[1496]: New session 67 of user core. May 14 00:26:15.648512 systemd[1]: Started session-67.scope - Session 67 of User core. May 14 00:26:16.412261 sshd[5992]: Connection closed by 139.178.89.65 port 42940 May 14 00:26:16.414459 sshd-session[5990]: pam_unix(sshd:session): session closed for user core May 14 00:26:16.420698 systemd-logind[1496]: Session 67 logged out. Waiting for processes to exit. May 14 00:26:16.421837 systemd[1]: sshd@68-37.27.39.104:22-139.178.89.65:42940.service: Deactivated successfully. May 14 00:26:16.425769 systemd[1]: session-67.scope: Deactivated successfully. May 14 00:26:16.428100 systemd-logind[1496]: Removed session 67. May 14 00:26:17.124099 kubelet[2809]: E0514 00:26:17.124037 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:21.589427 systemd[1]: Started sshd@69-37.27.39.104:22-139.178.89.65:44018.service - OpenSSH per-connection server daemon (139.178.89.65:44018). May 14 00:26:22.125188 kubelet[2809]: E0514 00:26:22.125139 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:22.595807 sshd[6004]: Accepted publickey for core from 139.178.89.65 port 44018 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:26:22.598384 sshd-session[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:26:22.605977 systemd-logind[1496]: New session 68 of user core. May 14 00:26:22.610465 systemd[1]: Started session-68.scope - Session 68 of User core. May 14 00:26:23.370457 sshd[6006]: Connection closed by 139.178.89.65 port 44018 May 14 00:26:23.372523 sshd-session[6004]: pam_unix(sshd:session): session closed for user core May 14 00:26:23.377921 systemd-logind[1496]: Session 68 logged out. Waiting for processes to exit. May 14 00:26:23.379047 systemd[1]: sshd@69-37.27.39.104:22-139.178.89.65:44018.service: Deactivated successfully. May 14 00:26:23.382945 systemd[1]: session-68.scope: Deactivated successfully. May 14 00:26:23.385141 systemd-logind[1496]: Removed session 68. May 14 00:26:27.126259 kubelet[2809]: E0514 00:26:27.125970 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:28.544361 systemd[1]: Started sshd@70-37.27.39.104:22-139.178.89.65:36992.service - OpenSSH per-connection server daemon (139.178.89.65:36992). May 14 00:26:29.561633 sshd[6018]: Accepted publickey for core from 139.178.89.65 port 36992 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:26:29.563779 sshd-session[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:26:29.576988 systemd-logind[1496]: New session 69 of user core. May 14 00:26:29.584643 systemd[1]: Started session-69.scope - Session 69 of User core. May 14 00:26:30.336435 sshd[6020]: Connection closed by 139.178.89.65 port 36992 May 14 00:26:30.337528 sshd-session[6018]: pam_unix(sshd:session): session closed for user core May 14 00:26:30.343870 systemd[1]: sshd@70-37.27.39.104:22-139.178.89.65:36992.service: Deactivated successfully. May 14 00:26:30.346825 systemd[1]: session-69.scope: Deactivated successfully. 
May 14 00:26:30.348059 systemd-logind[1496]: Session 69 logged out. Waiting for processes to exit. May 14 00:26:30.349664 systemd-logind[1496]: Removed session 69. May 14 00:26:32.127017 kubelet[2809]: E0514 00:26:32.126914 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:35.509450 systemd[1]: Started sshd@71-37.27.39.104:22-139.178.89.65:36996.service - OpenSSH per-connection server daemon (139.178.89.65:36996). May 14 00:26:36.514286 sshd[6034]: Accepted publickey for core from 139.178.89.65 port 36996 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:26:36.515983 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:26:36.525893 systemd-logind[1496]: New session 70 of user core. May 14 00:26:36.531542 systemd[1]: Started session-70.scope - Session 70 of User core. May 14 00:26:37.127632 kubelet[2809]: E0514 00:26:37.127560 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:37.302160 sshd[6036]: Connection closed by 139.178.89.65 port 36996 May 14 00:26:37.303030 sshd-session[6034]: pam_unix(sshd:session): session closed for user core May 14 00:26:37.307347 systemd[1]: sshd@71-37.27.39.104:22-139.178.89.65:36996.service: Deactivated successfully. May 14 00:26:37.311030 systemd[1]: session-70.scope: Deactivated successfully. May 14 00:26:37.314042 systemd-logind[1496]: Session 70 logged out. Waiting for processes to exit. May 14 00:26:37.316836 systemd-logind[1496]: Removed session 70. May 14 00:26:42.123692 containerd[1528]: time="2025-05-14T00:26:42.123629252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"146e8e25d47c96b8ce031a460d4874c4bf407a36e8370daad3e161a4afe9faeb\" pid:6064 exited_at:{seconds:1747182402 nanos:123091187}" May 14 00:26:42.128151 kubelet[2809]: E0514 00:26:42.127699 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:42.472905 systemd[1]: Started sshd@72-37.27.39.104:22-139.178.89.65:57062.service - OpenSSH per-connection server daemon (139.178.89.65:57062). May 14 00:26:43.470458 sshd[6077]: Accepted publickey for core from 139.178.89.65 port 57062 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:26:43.472328 sshd-session[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:26:43.479310 systemd-logind[1496]: New session 71 of user core. May 14 00:26:43.485404 systemd[1]: Started session-71.scope - Session 71 of User core. May 14 00:26:44.273803 sshd[6080]: Connection closed by 139.178.89.65 port 57062 May 14 00:26:44.274754 sshd-session[6077]: pam_unix(sshd:session): session closed for user core May 14 00:26:44.281030 systemd-logind[1496]: Session 71 logged out. Waiting for processes to exit. May 14 00:26:44.282478 systemd[1]: sshd@72-37.27.39.104:22-139.178.89.65:57062.service: Deactivated successfully. May 14 00:26:44.285943 systemd[1]: session-71.scope: Deactivated successfully. May 14 00:26:44.288379 systemd-logind[1496]: Removed session 71. May 14 00:26:47.128895 kubelet[2809]: E0514 00:26:47.128823 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:49.451051 systemd[1]: Started sshd@73-37.27.39.104:22-139.178.89.65:40898.service - OpenSSH per-connection server daemon (139.178.89.65:40898). 
May 14 00:26:50.465419 sshd[6093]: Accepted publickey for core from 139.178.89.65 port 40898 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:26:50.467940 sshd-session[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:26:50.476660 systemd-logind[1496]: New session 72 of user core. May 14 00:26:50.483506 systemd[1]: Started session-72.scope - Session 72 of User core. May 14 00:26:51.241693 sshd[6095]: Connection closed by 139.178.89.65 port 40898 May 14 00:26:51.242561 sshd-session[6093]: pam_unix(sshd:session): session closed for user core May 14 00:26:51.247320 systemd-logind[1496]: Session 72 logged out. Waiting for processes to exit. May 14 00:26:51.248141 systemd[1]: sshd@73-37.27.39.104:22-139.178.89.65:40898.service: Deactivated successfully. May 14 00:26:51.251021 systemd[1]: session-72.scope: Deactivated successfully. May 14 00:26:51.252963 systemd-logind[1496]: Removed session 72. May 14 00:26:52.129708 kubelet[2809]: E0514 00:26:52.129649 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:56.419058 systemd[1]: Started sshd@74-37.27.39.104:22-139.178.89.65:40900.service - OpenSSH per-connection server daemon (139.178.89.65:40900). May 14 00:26:57.130415 kubelet[2809]: E0514 00:26:57.130346 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:26:57.432888 sshd[6107]: Accepted publickey for core from 139.178.89.65 port 40900 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:26:57.436798 sshd-session[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:26:57.449152 systemd-logind[1496]: New session 73 of user core. May 14 00:26:57.457508 systemd[1]: Started session-73.scope - Session 73 of User core. May 14 00:26:58.224317 sshd[6109]: Connection closed by 139.178.89.65 port 40900 May 14 00:26:58.226087 sshd-session[6107]: pam_unix(sshd:session): session closed for user core May 14 00:26:58.231590 systemd[1]: sshd@74-37.27.39.104:22-139.178.89.65:40900.service: Deactivated successfully. May 14 00:26:58.235136 systemd[1]: session-73.scope: Deactivated successfully. May 14 00:26:58.236988 systemd-logind[1496]: Session 73 logged out. Waiting for processes to exit. May 14 00:26:58.238754 systemd-logind[1496]: Removed session 73. May 14 00:27:02.131005 kubelet[2809]: E0514 00:27:02.130936 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:03.395106 systemd[1]: Started sshd@75-37.27.39.104:22-139.178.89.65:33362.service - OpenSSH per-connection server daemon (139.178.89.65:33362). May 14 00:27:04.401150 sshd[6133]: Accepted publickey for core from 139.178.89.65 port 33362 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:04.403399 sshd-session[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:04.411710 systemd-logind[1496]: New session 74 of user core. May 14 00:27:04.417446 systemd[1]: Started session-74.scope - Session 74 of User core. May 14 00:27:05.193960 sshd[6135]: Connection closed by 139.178.89.65 port 33362 May 14 00:27:05.194906 sshd-session[6133]: pam_unix(sshd:session): session closed for user core May 14 00:27:05.200371 systemd[1]: sshd@75-37.27.39.104:22-139.178.89.65:33362.service: Deactivated successfully. May 14 00:27:05.203789 systemd[1]: session-74.scope: Deactivated successfully. 
May 14 00:27:05.205660 systemd-logind[1496]: Session 74 logged out. Waiting for processes to exit. May 14 00:27:05.207588 systemd-logind[1496]: Removed session 74. May 14 00:27:07.131547 kubelet[2809]: E0514 00:27:07.131470 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:10.366050 systemd[1]: Started sshd@76-37.27.39.104:22-139.178.89.65:59606.service - OpenSSH per-connection server daemon (139.178.89.65:59606). May 14 00:27:11.387168 sshd[6147]: Accepted publickey for core from 139.178.89.65 port 59606 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:11.389417 sshd-session[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:11.395329 systemd-logind[1496]: New session 75 of user core. May 14 00:27:11.399489 systemd[1]: Started session-75.scope - Session 75 of User core. May 14 00:27:12.106242 containerd[1528]: time="2025-05-14T00:27:12.106154942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"3a0d21ead11661919d6ea4e26b7bf931f59a93db70f75162fd0f85dcb185866e\" pid:6172 exited_at:{seconds:1747182432 nanos:105728928}" May 14 00:27:12.131676 kubelet[2809]: E0514 00:27:12.131628 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:12.179436 sshd[6151]: Connection closed by 139.178.89.65 port 59606 May 14 00:27:12.180396 sshd-session[6147]: pam_unix(sshd:session): session closed for user core May 14 00:27:12.184871 systemd-logind[1496]: Session 75 logged out. Waiting for processes to exit. May 14 00:27:12.185707 systemd[1]: sshd@76-37.27.39.104:22-139.178.89.65:59606.service: Deactivated successfully. May 14 00:27:12.189011 systemd[1]: session-75.scope: Deactivated successfully. May 14 00:27:12.190877 systemd-logind[1496]: Removed session 75. May 14 00:27:17.132439 kubelet[2809]: E0514 00:27:17.132363 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:17.354734 systemd[1]: Started sshd@77-37.27.39.104:22-139.178.89.65:57442.service - OpenSSH per-connection server daemon (139.178.89.65:57442). May 14 00:27:18.361481 sshd[6189]: Accepted publickey for core from 139.178.89.65 port 57442 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:18.364143 sshd-session[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:18.375067 systemd-logind[1496]: New session 76 of user core. May 14 00:27:18.380123 systemd[1]: Started session-76.scope - Session 76 of User core. May 14 00:27:19.155873 sshd[6191]: Connection closed by 139.178.89.65 port 57442 May 14 00:27:19.156793 sshd-session[6189]: pam_unix(sshd:session): session closed for user core May 14 00:27:19.162206 systemd-logind[1496]: Session 76 logged out. Waiting for processes to exit. May 14 00:27:19.162631 systemd[1]: sshd@77-37.27.39.104:22-139.178.89.65:57442.service: Deactivated successfully. May 14 00:27:19.165781 systemd[1]: session-76.scope: Deactivated successfully. May 14 00:27:19.167683 systemd-logind[1496]: Removed session 76. May 14 00:27:22.133322 kubelet[2809]: E0514 00:27:22.133260 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:24.337329 systemd[1]: Started sshd@78-37.27.39.104:22-139.178.89.65:57454.service - OpenSSH per-connection server daemon (139.178.89.65:57454). 
May 14 00:27:25.340748 sshd[6204]: Accepted publickey for core from 139.178.89.65 port 57454 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:25.343326 sshd-session[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:25.353798 systemd-logind[1496]: New session 77 of user core. May 14 00:27:25.360482 systemd[1]: Started session-77.scope - Session 77 of User core. May 14 00:27:26.129166 sshd[6206]: Connection closed by 139.178.89.65 port 57454 May 14 00:27:26.130359 sshd-session[6204]: pam_unix(sshd:session): session closed for user core May 14 00:27:26.135659 systemd[1]: sshd@78-37.27.39.104:22-139.178.89.65:57454.service: Deactivated successfully. May 14 00:27:26.139109 systemd[1]: session-77.scope: Deactivated successfully. May 14 00:27:26.142368 systemd-logind[1496]: Session 77 logged out. Waiting for processes to exit. May 14 00:27:26.144857 systemd-logind[1496]: Removed session 77. May 14 00:27:27.134845 kubelet[2809]: E0514 00:27:27.134755 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:31.303431 systemd[1]: Started sshd@79-37.27.39.104:22-139.178.89.65:51476.service - OpenSSH per-connection server daemon (139.178.89.65:51476). May 14 00:27:32.135051 kubelet[2809]: E0514 00:27:32.134978 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:32.309423 sshd[6218]: Accepted publickey for core from 139.178.89.65 port 51476 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:32.312111 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:32.319770 systemd-logind[1496]: New session 78 of user core. May 14 00:27:32.328540 systemd[1]: Started session-78.scope - Session 78 of User core. May 14 00:27:33.113682 sshd[6220]: Connection closed by 139.178.89.65 port 51476 May 14 00:27:33.114960 sshd-session[6218]: pam_unix(sshd:session): session closed for user core May 14 00:27:33.121211 systemd[1]: sshd@79-37.27.39.104:22-139.178.89.65:51476.service: Deactivated successfully. May 14 00:27:33.124978 systemd[1]: session-78.scope: Deactivated successfully. May 14 00:27:33.126596 systemd-logind[1496]: Session 78 logged out. Waiting for processes to exit. May 14 00:27:33.128596 systemd-logind[1496]: Removed session 78. May 14 00:27:37.136269 kubelet[2809]: E0514 00:27:37.136143 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:38.286853 systemd[1]: Started sshd@80-37.27.39.104:22-139.178.89.65:34014.service - OpenSSH per-connection server daemon (139.178.89.65:34014). May 14 00:27:39.296741 sshd[6234]: Accepted publickey for core from 139.178.89.65 port 34014 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:39.298958 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:39.305990 systemd-logind[1496]: New session 79 of user core. May 14 00:27:39.311458 systemd[1]: Started session-79.scope - Session 79 of User core. May 14 00:27:40.096543 sshd[6236]: Connection closed by 139.178.89.65 port 34014 May 14 00:27:40.097751 sshd-session[6234]: pam_unix(sshd:session): session closed for user core May 14 00:27:40.102870 systemd[1]: sshd@80-37.27.39.104:22-139.178.89.65:34014.service: Deactivated successfully. May 14 00:27:40.105954 systemd[1]: session-79.scope: Deactivated successfully. 
May 14 00:27:40.107406 systemd-logind[1496]: Session 79 logged out. Waiting for processes to exit. May 14 00:27:40.109133 systemd-logind[1496]: Removed session 79. May 14 00:27:42.123800 containerd[1528]: time="2025-05-14T00:27:42.123693152Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"d0a1cb62587eb4486c55a201980c154d45d3261f51dcb9cf449c927224b0f8d0\" pid:6262 exited_at:{seconds:1747182462 nanos:123281786}" May 14 00:27:42.136899 kubelet[2809]: E0514 00:27:42.136833 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:45.273695 systemd[1]: Started sshd@81-37.27.39.104:22-139.178.89.65:34026.service - OpenSSH per-connection server daemon (139.178.89.65:34026). May 14 00:27:46.278165 sshd[6275]: Accepted publickey for core from 139.178.89.65 port 34026 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:46.281273 sshd-session[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:46.291537 systemd-logind[1496]: New session 80 of user core. May 14 00:27:46.296467 systemd[1]: Started session-80.scope - Session 80 of User core. May 14 00:27:47.085479 sshd[6277]: Connection closed by 139.178.89.65 port 34026 May 14 00:27:47.086458 sshd-session[6275]: pam_unix(sshd:session): session closed for user core May 14 00:27:47.090750 systemd[1]: sshd@81-37.27.39.104:22-139.178.89.65:34026.service: Deactivated successfully. May 14 00:27:47.093677 systemd[1]: session-80.scope: Deactivated successfully. May 14 00:27:47.095872 systemd-logind[1496]: Session 80 logged out. Waiting for processes to exit. May 14 00:27:47.098019 systemd-logind[1496]: Removed session 80. May 14 00:27:47.137966 kubelet[2809]: E0514 00:27:47.137872 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:47.260265 systemd[1]: Started sshd@82-37.27.39.104:22-139.178.89.65:36978.service - OpenSSH per-connection server daemon (139.178.89.65:36978). May 14 00:27:48.270858 sshd[6289]: Accepted publickey for core from 139.178.89.65 port 36978 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:48.272968 sshd-session[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:48.280017 systemd-logind[1496]: New session 81 of user core. May 14 00:27:48.285470 systemd[1]: Started session-81.scope - Session 81 of User core. May 14 00:27:49.275289 sshd[6291]: Connection closed by 139.178.89.65 port 36978 May 14 00:27:49.277196 sshd-session[6289]: pam_unix(sshd:session): session closed for user core May 14 00:27:49.281304 systemd[1]: sshd@82-37.27.39.104:22-139.178.89.65:36978.service: Deactivated successfully. May 14 00:27:49.284404 systemd[1]: session-81.scope: Deactivated successfully. May 14 00:27:49.286992 systemd-logind[1496]: Session 81 logged out. Waiting for processes to exit. May 14 00:27:49.289168 systemd-logind[1496]: Removed session 81. May 14 00:27:49.448903 systemd[1]: Started sshd@83-37.27.39.104:22-139.178.89.65:36990.service - OpenSSH per-connection server daemon (139.178.89.65:36990). 
May 14 00:27:49.824288 kubelet[2809]: E0514 00:27:49.824196 2809 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 14 00:27:49.824288 kubelet[2809]: E0514 00:27:49.824289 2809 kubelet.go:2993] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" May 14 00:27:50.488605 sshd[6303]: Accepted publickey for core from 139.178.89.65 port 36990 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:50.491188 sshd-session[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:50.500277 systemd-logind[1496]: New session 82 of user core. May 14 00:27:50.521990 systemd[1]: Started session-82.scope - Session 82 of User core. May 14 00:27:52.138157 kubelet[2809]: E0514 00:27:52.138089 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:27:52.432327 sshd[6305]: Connection closed by 139.178.89.65 port 36990 May 14 00:27:52.436841 sshd-session[6303]: pam_unix(sshd:session): session closed for user core May 14 00:27:52.443036 systemd[1]: sshd@83-37.27.39.104:22-139.178.89.65:36990.service: Deactivated successfully. May 14 00:27:52.446074 systemd[1]: session-82.scope: Deactivated successfully. May 14 00:27:52.447623 systemd-logind[1496]: Session 82 logged out. Waiting for processes to exit. May 14 00:27:52.449753 systemd-logind[1496]: Removed session 82. May 14 00:27:52.601536 systemd[1]: Started sshd@84-37.27.39.104:22-139.178.89.65:37002.service - OpenSSH per-connection server daemon (139.178.89.65:37002). May 14 00:27:53.600317 sshd[6333]: Accepted publickey for core from 139.178.89.65 port 37002 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:53.601156 sshd-session[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:53.612841 systemd-logind[1496]: New session 83 of user core. May 14 00:27:53.617425 systemd[1]: Started session-83.scope - Session 83 of User core. May 14 00:27:54.666618 sshd[6335]: Connection closed by 139.178.89.65 port 37002 May 14 00:27:54.667407 sshd-session[6333]: pam_unix(sshd:session): session closed for user core May 14 00:27:54.672418 systemd-logind[1496]: Session 83 logged out. Waiting for processes to exit. May 14 00:27:54.672753 systemd[1]: sshd@84-37.27.39.104:22-139.178.89.65:37002.service: Deactivated successfully. May 14 00:27:54.675705 systemd[1]: session-83.scope: Deactivated successfully. May 14 00:27:54.677258 systemd-logind[1496]: Removed session 83. May 14 00:27:54.846523 systemd[1]: Started sshd@85-37.27.39.104:22-139.178.89.65:37012.service - OpenSSH per-connection server daemon (139.178.89.65:37012). May 14 00:27:55.850317 sshd[6345]: Accepted publickey for core from 139.178.89.65 port 37012 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:27:55.852359 sshd-session[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:27:55.862208 systemd-logind[1496]: New session 84 of user core. May 14 00:27:55.869533 systemd[1]: Started session-84.scope - Session 84 of User core. May 14 00:27:56.638206 sshd[6347]: Connection closed by 139.178.89.65 port 37012 May 14 00:27:56.639043 sshd-session[6345]: pam_unix(sshd:session): session closed for user core May 14 00:27:56.644160 systemd-logind[1496]: Session 84 logged out. Waiting for processes to exit. 
May 14 00:27:56.645139 systemd[1]: sshd@85-37.27.39.104:22-139.178.89.65:37012.service: Deactivated successfully. May 14 00:27:56.648649 systemd[1]: session-84.scope: Deactivated successfully. May 14 00:27:56.650720 systemd-logind[1496]: Removed session 84. May 14 00:27:57.138808 kubelet[2809]: E0514 00:27:57.138720 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:01.810189 systemd[1]: Started sshd@86-37.27.39.104:22-139.178.89.65:41404.service - OpenSSH per-connection server daemon (139.178.89.65:41404). May 14 00:28:02.139544 kubelet[2809]: E0514 00:28:02.139355 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:02.829626 sshd[6359]: Accepted publickey for core from 139.178.89.65 port 41404 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:28:02.832351 sshd-session[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:28:02.840897 systemd-logind[1496]: New session 85 of user core. May 14 00:28:02.850519 systemd[1]: Started session-85.scope - Session 85 of User core. May 14 00:28:03.627748 sshd[6361]: Connection closed by 139.178.89.65 port 41404 May 14 00:28:03.629590 sshd-session[6359]: pam_unix(sshd:session): session closed for user core May 14 00:28:03.633745 systemd[1]: sshd@86-37.27.39.104:22-139.178.89.65:41404.service: Deactivated successfully. May 14 00:28:03.636574 systemd[1]: session-85.scope: Deactivated successfully. May 14 00:28:03.640253 systemd-logind[1496]: Session 85 logged out. Waiting for processes to exit. May 14 00:28:03.641891 systemd-logind[1496]: Removed session 85. May 14 00:28:07.140637 kubelet[2809]: E0514 00:28:07.140536 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:08.806612 systemd[1]: Started sshd@87-37.27.39.104:22-139.178.89.65:38742.service - OpenSSH per-connection server daemon (139.178.89.65:38742). May 14 00:28:09.816185 sshd[6373]: Accepted publickey for core from 139.178.89.65 port 38742 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:28:09.818914 sshd-session[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:28:09.827297 systemd-logind[1496]: New session 86 of user core. May 14 00:28:09.834461 systemd[1]: Started session-86.scope - Session 86 of User core. May 14 00:28:10.590360 sshd[6375]: Connection closed by 139.178.89.65 port 38742 May 14 00:28:10.591625 sshd-session[6373]: pam_unix(sshd:session): session closed for user core May 14 00:28:10.596791 systemd[1]: sshd@87-37.27.39.104:22-139.178.89.65:38742.service: Deactivated successfully. May 14 00:28:10.601266 systemd[1]: session-86.scope: Deactivated successfully. May 14 00:28:10.603974 systemd-logind[1496]: Session 86 logged out. Waiting for processes to exit. May 14 00:28:10.608555 systemd-logind[1496]: Removed session 86. 
May 14 00:28:12.135302 containerd[1528]: time="2025-05-14T00:28:12.135120764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"0e1f7a520760830b708baf5886549a7672c9d2251e98096395b9d0fe7063b4f6\" pid:6401 exited_at:{seconds:1747182492 nanos:134606305}" May 14 00:28:12.141394 kubelet[2809]: E0514 00:28:12.141331 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:15.760026 systemd[1]: Started sshd@88-37.27.39.104:22-139.178.89.65:38756.service - OpenSSH per-connection server daemon (139.178.89.65:38756). May 14 00:28:16.742835 sshd[6414]: Accepted publickey for core from 139.178.89.65 port 38756 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:28:16.744651 sshd-session[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:28:16.753080 systemd-logind[1496]: New session 87 of user core. May 14 00:28:16.759466 systemd[1]: Started session-87.scope - Session 87 of User core. May 14 00:28:17.142590 kubelet[2809]: E0514 00:28:17.142312 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:17.531351 sshd[6416]: Connection closed by 139.178.89.65 port 38756 May 14 00:28:17.532453 sshd-session[6414]: pam_unix(sshd:session): session closed for user core May 14 00:28:17.537961 systemd[1]: sshd@88-37.27.39.104:22-139.178.89.65:38756.service: Deactivated successfully. May 14 00:28:17.541575 systemd[1]: session-87.scope: Deactivated successfully. May 14 00:28:17.544068 systemd-logind[1496]: Session 87 logged out. Waiting for processes to exit. May 14 00:28:17.546411 systemd-logind[1496]: Removed session 87. May 14 00:28:22.143249 kubelet[2809]: E0514 00:28:22.143163 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:22.704745 systemd[1]: Started sshd@89-37.27.39.104:22-139.178.89.65:40084.service - OpenSSH per-connection server daemon (139.178.89.65:40084). May 14 00:28:23.716876 sshd[6429]: Accepted publickey for core from 139.178.89.65 port 40084 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:28:23.719656 sshd-session[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:28:23.730178 systemd-logind[1496]: New session 88 of user core. May 14 00:28:23.739826 systemd[1]: Started session-88.scope - Session 88 of User core. May 14 00:28:24.489870 sshd[6431]: Connection closed by 139.178.89.65 port 40084 May 14 00:28:24.491059 sshd-session[6429]: pam_unix(sshd:session): session closed for user core May 14 00:28:24.495517 systemd[1]: sshd@89-37.27.39.104:22-139.178.89.65:40084.service: Deactivated successfully. May 14 00:28:24.498902 systemd[1]: session-88.scope: Deactivated successfully. May 14 00:28:24.501723 systemd-logind[1496]: Session 88 logged out. Waiting for processes to exit. May 14 00:28:24.504062 systemd-logind[1496]: Removed session 88. May 14 00:28:27.143336 kubelet[2809]: E0514 00:28:27.143258 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:29.667738 systemd[1]: Started sshd@90-37.27.39.104:22-139.178.89.65:47460.service - OpenSSH per-connection server daemon (139.178.89.65:47460). 
May 14 00:28:30.679089 sshd[6443]: Accepted publickey for core from 139.178.89.65 port 47460 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:28:30.681187 sshd-session[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:28:30.688744 systemd-logind[1496]: New session 89 of user core. May 14 00:28:30.698557 systemd[1]: Started session-89.scope - Session 89 of User core. May 14 00:28:31.480766 sshd[6445]: Connection closed by 139.178.89.65 port 47460 May 14 00:28:31.482551 sshd-session[6443]: pam_unix(sshd:session): session closed for user core May 14 00:28:31.487903 systemd[1]: sshd@90-37.27.39.104:22-139.178.89.65:47460.service: Deactivated successfully. May 14 00:28:31.491787 systemd[1]: session-89.scope: Deactivated successfully. May 14 00:28:31.493355 systemd-logind[1496]: Session 89 logged out. Waiting for processes to exit. May 14 00:28:31.495868 systemd-logind[1496]: Removed session 89. May 14 00:28:32.143940 kubelet[2809]: E0514 00:28:32.143853 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:36.656264 systemd[1]: Started sshd@91-37.27.39.104:22-139.178.89.65:38822.service - OpenSSH per-connection server daemon (139.178.89.65:38822). May 14 00:28:37.144952 kubelet[2809]: E0514 00:28:37.144838 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:37.660275 sshd[6463]: Accepted publickey for core from 139.178.89.65 port 38822 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:28:37.661786 sshd-session[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:28:37.670048 systemd-logind[1496]: New session 90 of user core. May 14 00:28:37.675453 systemd[1]: Started session-90.scope - Session 90 of User core. May 14 00:28:38.435653 sshd[6465]: Connection closed by 139.178.89.65 port 38822 May 14 00:28:38.436601 sshd-session[6463]: pam_unix(sshd:session): session closed for user core May 14 00:28:38.444632 systemd[1]: sshd@91-37.27.39.104:22-139.178.89.65:38822.service: Deactivated successfully. May 14 00:28:38.448528 systemd[1]: session-90.scope: Deactivated successfully. May 14 00:28:38.450935 systemd-logind[1496]: Session 90 logged out. Waiting for processes to exit. May 14 00:28:38.452904 systemd-logind[1496]: Removed session 90. May 14 00:28:42.137659 containerd[1528]: time="2025-05-14T00:28:42.137593932Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"d58d6bfb295eba46fbe4cc4432143da33190c424063d253b072bc014a6326c92\" pid:6499 exited_at:{seconds:1747182522 nanos:135913138}" May 14 00:28:42.145259 kubelet[2809]: E0514 00:28:42.145204 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:43.615065 systemd[1]: Started sshd@92-37.27.39.104:22-139.178.89.65:38834.service - OpenSSH per-connection server daemon (139.178.89.65:38834). May 14 00:28:44.619138 sshd[6512]: Accepted publickey for core from 139.178.89.65 port 38834 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:28:44.621839 sshd-session[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:28:44.630320 systemd-logind[1496]: New session 91 of user core. May 14 00:28:44.637612 systemd[1]: Started session-91.scope - Session 91 of User core. 
May 14 00:28:45.427648 sshd[6514]: Connection closed by 139.178.89.65 port 38834 May 14 00:28:45.430530 sshd-session[6512]: pam_unix(sshd:session): session closed for user core May 14 00:28:45.435451 systemd[1]: sshd@92-37.27.39.104:22-139.178.89.65:38834.service: Deactivated successfully. May 14 00:28:45.439699 systemd[1]: session-91.scope: Deactivated successfully. May 14 00:28:45.440978 systemd-logind[1496]: Session 91 logged out. Waiting for processes to exit. May 14 00:28:45.442855 systemd-logind[1496]: Removed session 91. May 14 00:28:47.146382 kubelet[2809]: E0514 00:28:47.146260 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:50.600671 systemd[1]: Started sshd@93-37.27.39.104:22-139.178.89.65:35358.service - OpenSSH per-connection server daemon (139.178.89.65:35358). May 14 00:28:51.593594 sshd[6526]: Accepted publickey for core from 139.178.89.65 port 35358 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:28:51.596056 sshd-session[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:28:51.605980 systemd-logind[1496]: New session 92 of user core. May 14 00:28:51.613616 systemd[1]: Started session-92.scope - Session 92 of User core. May 14 00:28:52.147215 kubelet[2809]: E0514 00:28:52.147159 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:52.378268 sshd[6528]: Connection closed by 139.178.89.65 port 35358 May 14 00:28:52.378994 sshd-session[6526]: pam_unix(sshd:session): session closed for user core May 14 00:28:52.383176 systemd[1]: sshd@93-37.27.39.104:22-139.178.89.65:35358.service: Deactivated successfully. May 14 00:28:52.388254 systemd[1]: session-92.scope: Deactivated successfully. May 14 00:28:52.391754 systemd-logind[1496]: Session 92 logged out. Waiting for processes to exit. May 14 00:28:52.393289 systemd-logind[1496]: Removed session 92. May 14 00:28:57.148281 kubelet[2809]: E0514 00:28:57.148183 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:28:57.555215 systemd[1]: Started sshd@94-37.27.39.104:22-139.178.89.65:43310.service - OpenSSH per-connection server daemon (139.178.89.65:43310). May 14 00:28:58.562469 sshd[6541]: Accepted publickey for core from 139.178.89.65 port 43310 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ May 14 00:28:58.565595 sshd-session[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:28:58.582170 systemd-logind[1496]: New session 93 of user core. May 14 00:28:58.589148 systemd[1]: Started session-93.scope - Session 93 of User core. May 14 00:28:59.341267 sshd[6543]: Connection closed by 139.178.89.65 port 43310 May 14 00:28:59.342213 sshd-session[6541]: pam_unix(sshd:session): session closed for user core May 14 00:28:59.346591 systemd[1]: sshd@94-37.27.39.104:22-139.178.89.65:43310.service: Deactivated successfully. May 14 00:28:59.350969 systemd[1]: session-93.scope: Deactivated successfully. May 14 00:28:59.353140 systemd-logind[1496]: Session 93 logged out. Waiting for processes to exit. May 14 00:28:59.355652 systemd-logind[1496]: Removed session 93. May 14 00:29:02.148883 kubelet[2809]: E0514 00:29:02.148818 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down" May 14 00:29:04.516621 systemd[1]: Started sshd@95-37.27.39.104:22-139.178.89.65:43322.service - OpenSSH per-connection server daemon (139.178.89.65:43322). 
May 14 00:29:05.523342 sshd[6555]: Accepted publickey for core from 139.178.89.65 port 43322 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:29:05.524729 sshd-session[6555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:29:05.535056 systemd-logind[1496]: New session 94 of user core.
May 14 00:29:05.538463 systemd[1]: Started session-94.scope - Session 94 of User core.
May 14 00:29:06.300433 sshd[6557]: Connection closed by 139.178.89.65 port 43322
May 14 00:29:06.301458 sshd-session[6555]: pam_unix(sshd:session): session closed for user core
May 14 00:29:06.306438 systemd[1]: sshd@95-37.27.39.104:22-139.178.89.65:43322.service: Deactivated successfully.
May 14 00:29:06.310589 systemd[1]: session-94.scope: Deactivated successfully.
May 14 00:29:06.313659 systemd-logind[1496]: Session 94 logged out. Waiting for processes to exit.
May 14 00:29:06.315839 systemd-logind[1496]: Removed session 94.
May 14 00:29:07.149927 kubelet[2809]: E0514 00:29:07.149831 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:29:11.479216 systemd[1]: Started sshd@96-37.27.39.104:22-139.178.89.65:60498.service - OpenSSH per-connection server daemon (139.178.89.65:60498).
May 14 00:29:12.114791 containerd[1528]: time="2025-05-14T00:29:12.114526728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"97a02cd6ad924e72446cbfdcbafca8d9eedbbac205ae4b82aa7f838c3551b10a\" pid:6585 exited_at:{seconds:1747182552 nanos:114006940}"
May 14 00:29:12.150121 kubelet[2809]: E0514 00:29:12.149940 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:29:12.485918 sshd[6571]: Accepted publickey for core from 139.178.89.65 port 60498 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:29:12.488348 sshd-session[6571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:29:12.497117 systemd-logind[1496]: New session 95 of user core.
May 14 00:29:12.501468 systemd[1]: Started session-95.scope - Session 95 of User core.
May 14 00:29:13.265430 sshd[6598]: Connection closed by 139.178.89.65 port 60498
May 14 00:29:13.266416 sshd-session[6571]: pam_unix(sshd:session): session closed for user core
May 14 00:29:13.272321 systemd[1]: sshd@96-37.27.39.104:22-139.178.89.65:60498.service: Deactivated successfully.
May 14 00:29:13.276023 systemd[1]: session-95.scope: Deactivated successfully.
May 14 00:29:13.278452 systemd-logind[1496]: Session 95 logged out. Waiting for processes to exit.
May 14 00:29:13.280628 systemd-logind[1496]: Removed session 95.
May 14 00:29:17.150741 kubelet[2809]: E0514 00:29:17.150624 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:29:18.437524 systemd[1]: Started sshd@97-37.27.39.104:22-139.178.89.65:51896.service - OpenSSH per-connection server daemon (139.178.89.65:51896).
May 14 00:29:19.441598 sshd[6610]: Accepted publickey for core from 139.178.89.65 port 51896 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:29:19.443874 sshd-session[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:29:19.451674 systemd-logind[1496]: New session 96 of user core.
May 14 00:29:19.458458 systemd[1]: Started session-96.scope - Session 96 of User core.
May 14 00:29:20.222182 sshd[6612]: Connection closed by 139.178.89.65 port 51896
May 14 00:29:20.223518 sshd-session[6610]: pam_unix(sshd:session): session closed for user core
May 14 00:29:20.227311 systemd[1]: sshd@97-37.27.39.104:22-139.178.89.65:51896.service: Deactivated successfully.
May 14 00:29:20.229194 systemd[1]: session-96.scope: Deactivated successfully.
May 14 00:29:20.231187 systemd-logind[1496]: Session 96 logged out. Waiting for processes to exit.
May 14 00:29:20.232833 systemd-logind[1496]: Removed session 96.
May 14 00:29:22.151720 kubelet[2809]: E0514 00:29:22.151648 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:29:25.396187 systemd[1]: Started sshd@98-37.27.39.104:22-139.178.89.65:51910.service - OpenSSH per-connection server daemon (139.178.89.65:51910).
May 14 00:29:26.396170 sshd[6624]: Accepted publickey for core from 139.178.89.65 port 51910 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:29:26.398836 sshd-session[6624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:29:26.407099 systemd-logind[1496]: New session 97 of user core.
May 14 00:29:26.411560 systemd[1]: Started session-97.scope - Session 97 of User core.
May 14 00:29:27.152174 kubelet[2809]: E0514 00:29:27.152067 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:29:27.171037 sshd[6626]: Connection closed by 139.178.89.65 port 51910
May 14 00:29:27.171619 sshd-session[6624]: pam_unix(sshd:session): session closed for user core
May 14 00:29:27.174896 systemd[1]: sshd@98-37.27.39.104:22-139.178.89.65:51910.service: Deactivated successfully.
May 14 00:29:27.177530 systemd[1]: session-97.scope: Deactivated successfully.
May 14 00:29:27.179303 systemd-logind[1496]: Session 97 logged out. Waiting for processes to exit.
May 14 00:29:27.181293 systemd-logind[1496]: Removed session 97.
May 14 00:29:32.152619 kubelet[2809]: E0514 00:29:32.152549 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:29:32.344629 systemd[1]: Started sshd@99-37.27.39.104:22-139.178.89.65:53384.service - OpenSSH per-connection server daemon (139.178.89.65:53384).
May 14 00:29:33.345167 sshd[6637]: Accepted publickey for core from 139.178.89.65 port 53384 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:29:33.347408 sshd-session[6637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:29:33.355315 systemd-logind[1496]: New session 98 of user core.
May 14 00:29:33.365519 systemd[1]: Started session-98.scope - Session 98 of User core.
May 14 00:29:34.148663 sshd[6639]: Connection closed by 139.178.89.65 port 53384
May 14 00:29:34.149483 sshd-session[6637]: pam_unix(sshd:session): session closed for user core
May 14 00:29:34.154660 systemd[1]: sshd@99-37.27.39.104:22-139.178.89.65:53384.service: Deactivated successfully.
May 14 00:29:34.156197 systemd[1]: session-98.scope: Deactivated successfully.
May 14 00:29:34.156282 systemd-logind[1496]: Session 98 logged out. Waiting for processes to exit.
May 14 00:29:34.158560 systemd-logind[1496]: Removed session 98.
May 14 00:29:37.153119 kubelet[2809]: E0514 00:29:37.152967 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:29:39.320658 systemd[1]: Started sshd@100-37.27.39.104:22-139.178.89.65:33736.service - OpenSSH per-connection server daemon (139.178.89.65:33736).
May 14 00:29:40.324013 sshd[6654]: Accepted publickey for core from 139.178.89.65 port 33736 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:29:40.328056 sshd-session[6654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:29:40.340312 systemd-logind[1496]: New session 99 of user core.
May 14 00:29:40.345536 systemd[1]: Started session-99.scope - Session 99 of User core.
May 14 00:29:41.128243 sshd[6656]: Connection closed by 139.178.89.65 port 33736
May 14 00:29:41.129642 sshd-session[6654]: pam_unix(sshd:session): session closed for user core
May 14 00:29:41.134200 systemd[1]: sshd@100-37.27.39.104:22-139.178.89.65:33736.service: Deactivated successfully.
May 14 00:29:41.137576 systemd[1]: session-99.scope: Deactivated successfully.
May 14 00:29:41.139911 systemd-logind[1496]: Session 99 logged out. Waiting for processes to exit.
May 14 00:29:41.142081 systemd-logind[1496]: Removed session 99.
May 14 00:29:42.114852 containerd[1528]: time="2025-05-14T00:29:42.114768046Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"2fb0982d28d9100e52fc4243fb9b17f38d201a3b3d52d3a2f766c667851c7b15\" pid:6682 exited_at:{seconds:1747182582 nanos:114389855}"
May 14 00:29:42.153388 kubelet[2809]: E0514 00:29:42.153344 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:29:46.306365 systemd[1]: Started sshd@101-37.27.39.104:22-139.178.89.65:33744.service - OpenSSH per-connection server daemon (139.178.89.65:33744).
May 14 00:29:47.153592 kubelet[2809]: E0514 00:29:47.153525 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:29:47.313368 sshd[6695]: Accepted publickey for core from 139.178.89.65 port 33744 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:29:47.315784 sshd-session[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:29:47.325307 systemd-logind[1496]: New session 100 of user core.
May 14 00:29:47.330294 systemd[1]: Started session-100.scope - Session 100 of User core.
May 14 00:29:48.094647 sshd[6697]: Connection closed by 139.178.89.65 port 33744
May 14 00:29:48.095345 sshd-session[6695]: pam_unix(sshd:session): session closed for user core
May 14 00:29:48.098389 systemd[1]: sshd@101-37.27.39.104:22-139.178.89.65:33744.service: Deactivated successfully.
May 14 00:29:48.099865 systemd[1]: session-100.scope: Deactivated successfully.
May 14 00:29:48.101780 systemd-logind[1496]: Session 100 logged out. Waiting for processes to exit.
May 14 00:29:48.102800 systemd-logind[1496]: Removed session 100.
May 14 00:29:52.154497 kubelet[2809]: E0514 00:29:52.154442 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:29:53.272753 systemd[1]: Started sshd@102-37.27.39.104:22-139.178.89.65:37152.service - OpenSSH per-connection server daemon (139.178.89.65:37152).
May 14 00:29:54.281704 sshd[6709]: Accepted publickey for core from 139.178.89.65 port 37152 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:29:54.283899 sshd-session[6709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:29:54.291196 systemd-logind[1496]: New session 101 of user core.
May 14 00:29:54.294479 systemd[1]: Started session-101.scope - Session 101 of User core.
May 14 00:29:54.827269 kubelet[2809]: E0514 00:29:54.825714 2809 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:29:54.827269 kubelet[2809]: E0514 00:29:54.825798 2809 kubelet.go:2993] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:29:55.074973 sshd[6711]: Connection closed by 139.178.89.65 port 37152
May 14 00:29:55.075947 sshd-session[6709]: pam_unix(sshd:session): session closed for user core
May 14 00:29:55.082392 systemd[1]: sshd@102-37.27.39.104:22-139.178.89.65:37152.service: Deactivated successfully.
May 14 00:29:55.086762 systemd[1]: session-101.scope: Deactivated successfully.
May 14 00:29:55.088544 systemd-logind[1496]: Session 101 logged out. Waiting for processes to exit.
May 14 00:29:55.090830 systemd-logind[1496]: Removed session 101.
May 14 00:29:57.155529 kubelet[2809]: E0514 00:29:57.155454 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:00.249947 systemd[1]: Started sshd@103-37.27.39.104:22-139.178.89.65:40582.service - OpenSSH per-connection server daemon (139.178.89.65:40582).
May 14 00:30:01.273393 sshd[6722]: Accepted publickey for core from 139.178.89.65 port 40582 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:30:01.275592 sshd-session[6722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:30:01.283957 systemd-logind[1496]: New session 102 of user core.
May 14 00:30:01.289504 systemd[1]: Started session-102.scope - Session 102 of User core.
May 14 00:30:02.071465 sshd[6724]: Connection closed by 139.178.89.65 port 40582
May 14 00:30:02.073567 sshd-session[6722]: pam_unix(sshd:session): session closed for user core
May 14 00:30:02.080196 systemd-logind[1496]: Session 102 logged out. Waiting for processes to exit.
May 14 00:30:02.081501 systemd[1]: sshd@103-37.27.39.104:22-139.178.89.65:40582.service: Deactivated successfully.
May 14 00:30:02.085882 systemd[1]: session-102.scope: Deactivated successfully.
May 14 00:30:02.088665 systemd-logind[1496]: Removed session 102.
May 14 00:30:02.156508 kubelet[2809]: E0514 00:30:02.156403 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:07.156640 kubelet[2809]: E0514 00:30:07.156556 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:07.248561 systemd[1]: Started sshd@104-37.27.39.104:22-139.178.89.65:46014.service - OpenSSH per-connection server daemon (139.178.89.65:46014).
May 14 00:30:08.264710 sshd[6738]: Accepted publickey for core from 139.178.89.65 port 46014 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:30:08.266926 sshd-session[6738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:30:08.277490 systemd-logind[1496]: New session 103 of user core.
May 14 00:30:08.281509 systemd[1]: Started session-103.scope - Session 103 of User core.
May 14 00:30:09.077790 sshd[6740]: Connection closed by 139.178.89.65 port 46014
May 14 00:30:09.078423 sshd-session[6738]: pam_unix(sshd:session): session closed for user core
May 14 00:30:09.081445 systemd-logind[1496]: Session 103 logged out. Waiting for processes to exit.
May 14 00:30:09.081633 systemd[1]: sshd@104-37.27.39.104:22-139.178.89.65:46014.service: Deactivated successfully.
May 14 00:30:09.083179 systemd[1]: session-103.scope: Deactivated successfully.
May 14 00:30:09.084356 systemd-logind[1496]: Removed session 103.
May 14 00:30:12.135152 containerd[1528]: time="2025-05-14T00:30:12.134985059Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"803d9d8c460a7b136f4247a287651e2f398bb6287dfe043c51af3fe78e235a02\" pid:6777 exited_at:{seconds:1747182612 nanos:134390561}"
May 14 00:30:12.157306 kubelet[2809]: E0514 00:30:12.157201 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:14.255097 systemd[1]: Started sshd@105-37.27.39.104:22-139.178.89.65:46022.service - OpenSSH per-connection server daemon (139.178.89.65:46022).
May 14 00:30:15.255976 sshd[6790]: Accepted publickey for core from 139.178.89.65 port 46022 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:30:15.258062 sshd-session[6790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:30:15.267875 systemd-logind[1496]: New session 104 of user core.
May 14 00:30:15.275578 systemd[1]: Started session-104.scope - Session 104 of User core.
May 14 00:30:16.040720 sshd[6794]: Connection closed by 139.178.89.65 port 46022
May 14 00:30:16.041565 sshd-session[6790]: pam_unix(sshd:session): session closed for user core
May 14 00:30:16.046547 systemd[1]: sshd@105-37.27.39.104:22-139.178.89.65:46022.service: Deactivated successfully.
May 14 00:30:16.049788 systemd[1]: session-104.scope: Deactivated successfully.
May 14 00:30:16.051664 systemd-logind[1496]: Session 104 logged out. Waiting for processes to exit.
May 14 00:30:16.053510 systemd-logind[1496]: Removed session 104.
May 14 00:30:17.158348 kubelet[2809]: E0514 00:30:17.158280 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:21.215200 systemd[1]: Started sshd@106-37.27.39.104:22-139.178.89.65:41824.service - OpenSSH per-connection server daemon (139.178.89.65:41824).
May 14 00:30:22.159003 kubelet[2809]: E0514 00:30:22.158941 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:22.230427 sshd[6806]: Accepted publickey for core from 139.178.89.65 port 41824 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:30:22.232738 sshd-session[6806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:30:22.242843 systemd-logind[1496]: New session 105 of user core.
May 14 00:30:22.250467 systemd[1]: Started session-105.scope - Session 105 of User core.
May 14 00:30:23.018755 sshd[6808]: Connection closed by 139.178.89.65 port 41824
May 14 00:30:23.020680 sshd-session[6806]: pam_unix(sshd:session): session closed for user core
May 14 00:30:23.027037 systemd-logind[1496]: Session 105 logged out. Waiting for processes to exit.
May 14 00:30:23.028188 systemd[1]: sshd@106-37.27.39.104:22-139.178.89.65:41824.service: Deactivated successfully.
May 14 00:30:23.032342 systemd[1]: session-105.scope: Deactivated successfully.
May 14 00:30:23.033988 systemd-logind[1496]: Removed session 105.
May 14 00:30:27.160008 kubelet[2809]: E0514 00:30:27.159947 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:28.194040 systemd[1]: Started sshd@107-37.27.39.104:22-139.178.89.65:40568.service - OpenSSH per-connection server daemon (139.178.89.65:40568).
May 14 00:30:29.203354 sshd[6820]: Accepted publickey for core from 139.178.89.65 port 40568 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:30:29.206127 sshd-session[6820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:30:29.213164 systemd-logind[1496]: New session 106 of user core.
May 14 00:30:29.220424 systemd[1]: Started session-106.scope - Session 106 of User core.
May 14 00:30:29.982043 sshd[6822]: Connection closed by 139.178.89.65 port 40568
May 14 00:30:29.983275 sshd-session[6820]: pam_unix(sshd:session): session closed for user core
May 14 00:30:29.987857 systemd[1]: sshd@107-37.27.39.104:22-139.178.89.65:40568.service: Deactivated successfully.
May 14 00:30:29.991931 systemd[1]: session-106.scope: Deactivated successfully.
May 14 00:30:29.994656 systemd-logind[1496]: Session 106 logged out. Waiting for processes to exit.
May 14 00:30:29.998175 systemd-logind[1496]: Removed session 106.
May 14 00:30:32.160760 kubelet[2809]: E0514 00:30:32.160683 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:35.159157 systemd[1]: Started sshd@108-37.27.39.104:22-139.178.89.65:40572.service - OpenSSH per-connection server daemon (139.178.89.65:40572).
May 14 00:30:36.167077 sshd[6836]: Accepted publickey for core from 139.178.89.65 port 40572 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:30:36.169935 sshd-session[6836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:30:36.178975 systemd-logind[1496]: New session 107 of user core.
May 14 00:30:36.184437 systemd[1]: Started session-107.scope - Session 107 of User core.
May 14 00:30:36.952372 sshd[6838]: Connection closed by 139.178.89.65 port 40572
May 14 00:30:36.953380 sshd-session[6836]: pam_unix(sshd:session): session closed for user core
May 14 00:30:36.958272 systemd[1]: sshd@108-37.27.39.104:22-139.178.89.65:40572.service: Deactivated successfully.
May 14 00:30:36.961601 systemd[1]: session-107.scope: Deactivated successfully.
May 14 00:30:36.964157 systemd-logind[1496]: Session 107 logged out. Waiting for processes to exit.
May 14 00:30:36.966937 systemd-logind[1496]: Removed session 107.
May 14 00:30:37.161622 kubelet[2809]: E0514 00:30:37.161538 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:42.104158 containerd[1528]: time="2025-05-14T00:30:42.103869373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"2a1818e32a67bc65c936614e30c80d36b00847449239133ef79facbcf5de788c\" pid:6863 exited_at:{seconds:1747182642 nanos:103461948}"
May 14 00:30:42.123724 systemd[1]: Started sshd@109-37.27.39.104:22-139.178.89.65:53178.service - OpenSSH per-connection server daemon (139.178.89.65:53178).
May 14 00:30:42.163671 kubelet[2809]: E0514 00:30:42.163323 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:43.127455 sshd[6876]: Accepted publickey for core from 139.178.89.65 port 53178 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:30:43.129748 sshd-session[6876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:30:43.139555 systemd-logind[1496]: New session 108 of user core.
May 14 00:30:43.145514 systemd[1]: Started session-108.scope - Session 108 of User core.
May 14 00:30:43.920941 sshd[6878]: Connection closed by 139.178.89.65 port 53178
May 14 00:30:43.921841 sshd-session[6876]: pam_unix(sshd:session): session closed for user core
May 14 00:30:43.927849 systemd-logind[1496]: Session 108 logged out. Waiting for processes to exit.
May 14 00:30:43.929385 systemd[1]: sshd@109-37.27.39.104:22-139.178.89.65:53178.service: Deactivated successfully.
May 14 00:30:43.932811 systemd[1]: session-108.scope: Deactivated successfully.
May 14 00:30:43.935082 systemd-logind[1496]: Removed session 108.
May 14 00:30:47.164063 kubelet[2809]: E0514 00:30:47.163979 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:49.094832 systemd[1]: Started sshd@110-37.27.39.104:22-139.178.89.65:41610.service - OpenSSH per-connection server daemon (139.178.89.65:41610).
May 14 00:30:50.100016 sshd[6890]: Accepted publickey for core from 139.178.89.65 port 41610 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:30:50.102554 sshd-session[6890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:30:50.112376 systemd-logind[1496]: New session 109 of user core.
May 14 00:30:50.120470 systemd[1]: Started session-109.scope - Session 109 of User core.
May 14 00:30:50.883876 sshd[6892]: Connection closed by 139.178.89.65 port 41610
May 14 00:30:50.885522 sshd-session[6890]: pam_unix(sshd:session): session closed for user core
May 14 00:30:50.891190 systemd[1]: sshd@110-37.27.39.104:22-139.178.89.65:41610.service: Deactivated successfully.
May 14 00:30:50.894517 systemd[1]: session-109.scope: Deactivated successfully.
May 14 00:30:50.895963 systemd-logind[1496]: Session 109 logged out. Waiting for processes to exit.
May 14 00:30:50.898206 systemd-logind[1496]: Removed session 109.
May 14 00:30:52.164470 kubelet[2809]: E0514 00:30:52.164401 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:56.062668 systemd[1]: Started sshd@111-37.27.39.104:22-139.178.89.65:41620.service - OpenSSH per-connection server daemon (139.178.89.65:41620).
May 14 00:30:57.073283 sshd[6912]: Accepted publickey for core from 139.178.89.65 port 41620 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:30:57.074749 sshd-session[6912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:30:57.084649 systemd-logind[1496]: New session 110 of user core.
May 14 00:30:57.089444 systemd[1]: Started session-110.scope - Session 110 of User core.
May 14 00:30:57.165497 kubelet[2809]: E0514 00:30:57.165392 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:30:57.854456 sshd[6914]: Connection closed by 139.178.89.65 port 41620
May 14 00:30:57.855377 sshd-session[6912]: pam_unix(sshd:session): session closed for user core
May 14 00:30:57.861506 systemd-logind[1496]: Session 110 logged out. Waiting for processes to exit.
May 14 00:30:57.862430 systemd[1]: sshd@111-37.27.39.104:22-139.178.89.65:41620.service: Deactivated successfully.
May 14 00:30:57.865619 systemd[1]: session-110.scope: Deactivated successfully.
May 14 00:30:57.867709 systemd-logind[1496]: Removed session 110.
May 14 00:31:02.165682 kubelet[2809]: E0514 00:31:02.165585 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:03.034941 systemd[1]: Started sshd@112-37.27.39.104:22-139.178.89.65:48682.service - OpenSSH per-connection server daemon (139.178.89.65:48682).
May 14 00:31:04.054705 sshd[6926]: Accepted publickey for core from 139.178.89.65 port 48682 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:31:04.057163 sshd-session[6926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:31:04.064806 systemd-logind[1496]: New session 111 of user core.
May 14 00:31:04.069474 systemd[1]: Started session-111.scope - Session 111 of User core.
May 14 00:31:04.846741 sshd[6928]: Connection closed by 139.178.89.65 port 48682
May 14 00:31:04.847828 sshd-session[6926]: pam_unix(sshd:session): session closed for user core
May 14 00:31:04.853543 systemd[1]: sshd@112-37.27.39.104:22-139.178.89.65:48682.service: Deactivated successfully.
May 14 00:31:04.857624 systemd[1]: session-111.scope: Deactivated successfully.
May 14 00:31:04.859134 systemd-logind[1496]: Session 111 logged out. Waiting for processes to exit.
May 14 00:31:04.860829 systemd-logind[1496]: Removed session 111.
May 14 00:31:07.166746 kubelet[2809]: E0514 00:31:07.166683 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:10.023391 systemd[1]: Started sshd@113-37.27.39.104:22-139.178.89.65:48550.service - OpenSSH per-connection server daemon (139.178.89.65:48550).
May 14 00:31:11.035134 sshd[6940]: Accepted publickey for core from 139.178.89.65 port 48550 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:31:11.037420 sshd-session[6940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:31:11.044963 systemd-logind[1496]: New session 112 of user core.
May 14 00:31:11.050505 systemd[1]: Started session-112.scope - Session 112 of User core.
May 14 00:31:11.832937 sshd[6944]: Connection closed by 139.178.89.65 port 48550
May 14 00:31:11.834014 sshd-session[6940]: pam_unix(sshd:session): session closed for user core
May 14 00:31:11.838603 systemd[1]: sshd@113-37.27.39.104:22-139.178.89.65:48550.service: Deactivated successfully.
May 14 00:31:11.842129 systemd[1]: session-112.scope: Deactivated successfully.
May 14 00:31:11.844760 systemd-logind[1496]: Session 112 logged out. Waiting for processes to exit.
May 14 00:31:11.847679 systemd-logind[1496]: Removed session 112.
May 14 00:31:12.135855 containerd[1528]: time="2025-05-14T00:31:12.135571436Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"e968b98c34336ad85546c51e70d2086526ce6f675221decc4434fc0e96709cc2\" pid:6967 exited_at:{seconds:1747182672 nanos:135146747}"
May 14 00:31:12.167245 kubelet[2809]: E0514 00:31:12.167181 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:17.005024 systemd[1]: Started sshd@114-37.27.39.104:22-139.178.89.65:42430.service - OpenSSH per-connection server daemon (139.178.89.65:42430).
May 14 00:31:17.167723 kubelet[2809]: E0514 00:31:17.167626 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:18.019501 sshd[6980]: Accepted publickey for core from 139.178.89.65 port 42430 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:31:18.021730 sshd-session[6980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:31:18.031646 systemd-logind[1496]: New session 113 of user core.
May 14 00:31:18.037525 systemd[1]: Started session-113.scope - Session 113 of User core.
May 14 00:31:18.782679 sshd[6982]: Connection closed by 139.178.89.65 port 42430
May 14 00:31:18.783347 sshd-session[6980]: pam_unix(sshd:session): session closed for user core
May 14 00:31:18.786364 systemd-logind[1496]: Session 113 logged out. Waiting for processes to exit.
May 14 00:31:18.787957 systemd[1]: sshd@114-37.27.39.104:22-139.178.89.65:42430.service: Deactivated successfully.
May 14 00:31:18.789635 systemd[1]: session-113.scope: Deactivated successfully.
May 14 00:31:18.790853 systemd-logind[1496]: Removed session 113.
May 14 00:31:22.168968 kubelet[2809]: E0514 00:31:22.168890 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:23.960399 systemd[1]: Started sshd@115-37.27.39.104:22-139.178.89.65:42432.service - OpenSSH per-connection server daemon (139.178.89.65:42432).
May 14 00:31:24.970247 sshd[6995]: Accepted publickey for core from 139.178.89.65 port 42432 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:31:24.972302 sshd-session[6995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:31:24.977555 systemd-logind[1496]: New session 114 of user core.
May 14 00:31:24.983458 systemd[1]: Started session-114.scope - Session 114 of User core.
May 14 00:31:25.749902 sshd[6997]: Connection closed by 139.178.89.65 port 42432
May 14 00:31:25.750792 sshd-session[6995]: pam_unix(sshd:session): session closed for user core
May 14 00:31:25.757400 systemd[1]: sshd@115-37.27.39.104:22-139.178.89.65:42432.service: Deactivated successfully.
May 14 00:31:25.761765 systemd[1]: session-114.scope: Deactivated successfully.
May 14 00:31:25.763175 systemd-logind[1496]: Session 114 logged out. Waiting for processes to exit.
May 14 00:31:25.765506 systemd-logind[1496]: Removed session 114.
May 14 00:31:27.169246 kubelet[2809]: E0514 00:31:27.169119 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:30.928778 systemd[1]: Started sshd@116-37.27.39.104:22-139.178.89.65:34560.service - OpenSSH per-connection server daemon (139.178.89.65:34560).
May 14 00:31:31.940300 sshd[7009]: Accepted publickey for core from 139.178.89.65 port 34560 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:31:31.942714 sshd-session[7009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:31:31.952175 systemd-logind[1496]: New session 115 of user core.
May 14 00:31:31.955805 systemd[1]: Started session-115.scope - Session 115 of User core.
May 14 00:31:32.170380 kubelet[2809]: E0514 00:31:32.170320 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:32.732682 sshd[7011]: Connection closed by 139.178.89.65 port 34560
May 14 00:31:32.733858 sshd-session[7009]: pam_unix(sshd:session): session closed for user core
May 14 00:31:32.740215 systemd[1]: sshd@116-37.27.39.104:22-139.178.89.65:34560.service: Deactivated successfully.
May 14 00:31:32.743912 systemd[1]: session-115.scope: Deactivated successfully.
May 14 00:31:32.747798 systemd-logind[1496]: Session 115 logged out. Waiting for processes to exit.
May 14 00:31:32.750459 systemd-logind[1496]: Removed session 115.
May 14 00:31:37.170598 kubelet[2809]: E0514 00:31:37.170497 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:37.913604 systemd[1]: Started sshd@117-37.27.39.104:22-139.178.89.65:39322.service - OpenSSH per-connection server daemon (139.178.89.65:39322).
May 14 00:31:38.928154 sshd[7025]: Accepted publickey for core from 139.178.89.65 port 39322 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:31:38.930545 sshd-session[7025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:31:38.941399 systemd-logind[1496]: New session 116 of user core.
May 14 00:31:38.947537 systemd[1]: Started session-116.scope - Session 116 of User core.
May 14 00:31:39.717376 sshd[7027]: Connection closed by 139.178.89.65 port 39322
May 14 00:31:39.718531 sshd-session[7025]: pam_unix(sshd:session): session closed for user core
May 14 00:31:39.724387 systemd-logind[1496]: Session 116 logged out. Waiting for processes to exit.
May 14 00:31:39.724755 systemd[1]: sshd@117-37.27.39.104:22-139.178.89.65:39322.service: Deactivated successfully.
May 14 00:31:39.728801 systemd[1]: session-116.scope: Deactivated successfully.
May 14 00:31:39.730909 systemd-logind[1496]: Removed session 116.
May 14 00:31:42.132249 containerd[1528]: time="2025-05-14T00:31:42.131273201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"b500b485baf7947fac2a09633099decb4b9a533b39cab98bbcdfb8ccfdb36678\" pid:7053 exited_at:{seconds:1747182702 nanos:130613942}"
May 14 00:31:42.170820 kubelet[2809]: E0514 00:31:42.170726 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:44.890826 systemd[1]: Started sshd@118-37.27.39.104:22-139.178.89.65:39334.service - OpenSSH per-connection server daemon (139.178.89.65:39334).
May 14 00:31:45.897829 sshd[7073]: Accepted publickey for core from 139.178.89.65 port 39334 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:31:45.898714 sshd-session[7073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:31:45.909152 systemd-logind[1496]: New session 117 of user core.
May 14 00:31:45.917449 systemd[1]: Started session-117.scope - Session 117 of User core.
May 14 00:31:46.691253 sshd[7075]: Connection closed by 139.178.89.65 port 39334
May 14 00:31:46.692126 sshd-session[7073]: pam_unix(sshd:session): session closed for user core
May 14 00:31:46.697813 systemd-logind[1496]: Session 117 logged out. Waiting for processes to exit.
May 14 00:31:46.698477 systemd[1]: sshd@118-37.27.39.104:22-139.178.89.65:39334.service: Deactivated successfully.
May 14 00:31:46.702037 systemd[1]: session-117.scope: Deactivated successfully.
May 14 00:31:46.703811 systemd-logind[1496]: Removed session 117.
May 14 00:31:47.171736 kubelet[2809]: E0514 00:31:47.171669 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:51.863752 systemd[1]: Started sshd@119-37.27.39.104:22-139.178.89.65:58840.service - OpenSSH per-connection server daemon (139.178.89.65:58840).
May 14 00:31:52.173121 kubelet[2809]: E0514 00:31:52.172756 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:52.867290 sshd[7092]: Accepted publickey for core from 139.178.89.65 port 58840 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:31:52.869351 sshd-session[7092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:31:52.878554 systemd-logind[1496]: New session 118 of user core.
May 14 00:31:52.885528 systemd[1]: Started session-118.scope - Session 118 of User core.
May 14 00:31:53.658263 sshd[7094]: Connection closed by 139.178.89.65 port 58840
May 14 00:31:53.660370 sshd-session[7092]: pam_unix(sshd:session): session closed for user core
May 14 00:31:53.666021 systemd-logind[1496]: Session 118 logged out. Waiting for processes to exit.
May 14 00:31:53.666900 systemd[1]: sshd@119-37.27.39.104:22-139.178.89.65:58840.service: Deactivated successfully.
May 14 00:31:53.670196 systemd[1]: session-118.scope: Deactivated successfully.
May 14 00:31:53.671784 systemd-logind[1496]: Removed session 118.
May 14 00:31:57.173831 kubelet[2809]: E0514 00:31:57.173748 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:31:58.830329 systemd[1]: Started sshd@120-37.27.39.104:22-139.178.89.65:60618.service - OpenSSH per-connection server daemon (139.178.89.65:60618).
May 14 00:31:59.827456 kubelet[2809]: E0514 00:31:59.827350 2809 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:31:59.828152 kubelet[2809]: E0514 00:31:59.828099 2809 kubelet.go:2993] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
May 14 00:31:59.837507 sshd[7106]: Accepted publickey for core from 139.178.89.65 port 60618 ssh2: RSA SHA256:hl+BNyx6+jF0K6RugsdzigOvrBYkZECkTt1LqyyIOlQ
May 14 00:31:59.840008 sshd-session[7106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:31:59.846024 systemd-logind[1496]: New session 119 of user core.
May 14 00:31:59.851354 systemd[1]: Started session-119.scope - Session 119 of User core.
May 14 00:32:00.612696 sshd[7108]: Connection closed by 139.178.89.65 port 60618
May 14 00:32:00.614511 sshd-session[7106]: pam_unix(sshd:session): session closed for user core
May 14 00:32:00.619261 systemd[1]: sshd@120-37.27.39.104:22-139.178.89.65:60618.service: Deactivated successfully.
May 14 00:32:00.623897 systemd[1]: session-119.scope: Deactivated successfully.
May 14 00:32:00.625174 systemd-logind[1496]: Session 119 logged out. Waiting for processes to exit.
May 14 00:32:00.626734 systemd-logind[1496]: Removed session 119.
May 14 00:32:02.175054 kubelet[2809]: E0514 00:32:02.174930 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:32:07.176690 kubelet[2809]: E0514 00:32:07.176599 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:32:12.127686 containerd[1528]: time="2025-05-14T00:32:12.127624043Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab12bd0739f390cc9dd331e60ff26d292938081c479e7fff02af86d04f23f0d\" id:\"ccddfcecd76afc8eac649f2098ede101e1a4240b24fe67f0c06384a51b8e13dd\" pid:7133 exited_at:{seconds:1747182732 nanos:126398632}"
May 14 00:32:12.177260 kubelet[2809]: E0514 00:32:12.177081 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:32:16.477511 systemd[1]: cri-containerd-e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1.scope: Deactivated successfully.
May 14 00:32:16.478085 systemd[1]: cri-containerd-e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1.scope: Consumed 17.738s CPU time, 69.9M memory peak, 30.3M read from disk.
May 14 00:32:16.479332 containerd[1528]: time="2025-05-14T00:32:16.478485059Z" level=info msg="received exit event container_id:\"e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1\" id:\"e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1\" pid:3160 exit_status:1 exited_at:{seconds:1747182736 nanos:477513444}"
May 14 00:32:16.479952 containerd[1528]: time="2025-05-14T00:32:16.479833933Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1\" id:\"e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1\" pid:3160 exit_status:1 exited_at:{seconds:1747182736 nanos:477513444}"
May 14 00:32:16.504435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9d15bbcd68dfa4ca34035aa6c2bb3ad026e863a74ac6c1024f87bcedf61c3d1-rootfs.mount: Deactivated successfully.
May 14 00:32:16.682721 systemd[1]: cri-containerd-d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78.scope: Deactivated successfully.
May 14 00:32:16.685297 systemd[1]: cri-containerd-d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78.scope: Consumed 22.315s CPU time, 86.9M memory peak, 48.5M read from disk.
May 14 00:32:16.687609 containerd[1528]: time="2025-05-14T00:32:16.687544957Z" level=info msg="received exit event container_id:\"d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78\" id:\"d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78\" pid:2641 exit_status:1 exited_at:{seconds:1747182736 nanos:686846736}"
May 14 00:32:16.689708 containerd[1528]: time="2025-05-14T00:32:16.689666282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78\" id:\"d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78\" pid:2641 exit_status:1 exited_at:{seconds:1747182736 nanos:686846736}"
May 14 00:32:16.733328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d929fa1c141e686f858270220b2fcc9369ead14133e2875465954b778015fe78-rootfs.mount: Deactivated successfully.
May 14 00:32:16.919387 kubelet[2809]: E0514 00:32:16.919269 2809 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50274->10.0.0.2:2379: read: connection timed out"
May 14 00:32:17.177527 kubelet[2809]: E0514 00:32:17.177305 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:32:19.623381 kubelet[2809]: E0514 00:32:19.610721 2809 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:50102->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4284-0-0-n-186718797f.183f3d6fbdb4f1cc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4284-0-0-n-186718797f,UID:b4f488b485dafcac26d797b0d5f412ff,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-186718797f,},FirstTimestamp:2025-05-14 00:32:10.576458188 +0000 UTC m=+1296.116942084,LastTimestamp:2025-05-14 00:32:10.576458188 +0000 UTC m=+1296.116942084,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-186718797f,}"
May 14 00:32:22.178360 kubelet[2809]: E0514 00:32:22.178256 2809 kubelet.go:2412] "Skipping pod synchronization" err="container runtime is down"
May 14 00:32:22.792869 systemd[1]: cri-containerd-a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96.scope: Deactivated successfully.
May 14 00:32:22.794178 systemd[1]: cri-containerd-a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96.scope: Consumed 13.361s CPU time, 41.5M memory peak, 23.8M read from disk.
May 14 00:32:22.801314 containerd[1528]: time="2025-05-14T00:32:22.801095650Z" level=info msg="received exit event container_id:\"a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96\" id:\"a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96\" pid:2666 exit_status:1 exited_at:{seconds:1747182742 nanos:800201670}"
May 14 00:32:22.801314 containerd[1528]: time="2025-05-14T00:32:22.801199955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96\" id:\"a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96\" pid:2666 exit_status:1 exited_at:{seconds:1747182742 nanos:800201670}"
May 14 00:32:22.849740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a38d66800fdd01af7649bdce1ba713f4e103d174604cdab30143b36f4658cd96-rootfs.mount: Deactivated successfully.