Mar 25 02:44:53.047849 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025
Mar 25 02:44:53.047885 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 02:44:53.047900 kernel: BIOS-provided physical RAM map:
Mar 25 02:44:53.047911 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 25 02:44:53.047927 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 25 02:44:53.047938 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 25 02:44:53.047950 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 25 02:44:53.047961 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 25 02:44:53.047972 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 25 02:44:53.047983 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 25 02:44:53.047994 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 25 02:44:53.048013 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 25 02:44:53.048031 kernel: NX (Execute Disable) protection: active
Mar 25 02:44:53.048043 kernel: APIC: Static calls initialized
Mar 25 02:44:53.048056 kernel: SMBIOS 2.8 present.
Mar 25 02:44:53.048074 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 25 02:44:53.048087 kernel: Hypervisor detected: KVM
Mar 25 02:44:53.048099 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 25 02:44:53.048116 kernel: kvm-clock: using sched offset of 5695481966 cycles
Mar 25 02:44:53.048141 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 25 02:44:53.048153 kernel: tsc: Detected 2499.998 MHz processor
Mar 25 02:44:53.048164 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 25 02:44:53.048176 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 25 02:44:53.048201 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 25 02:44:53.048213 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 25 02:44:53.048225 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 25 02:44:53.048237 kernel: Using GB pages for direct mapping
Mar 25 02:44:53.048253 kernel: ACPI: Early table checksum verification disabled
Mar 25 02:44:53.048265 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 25 02:44:53.048278 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:44:53.048290 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:44:53.048302 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:44:53.048314 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 25 02:44:53.048326 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:44:53.048338 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:44:53.048350 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:44:53.048367 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:44:53.048379 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 25 02:44:53.048391 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 25 02:44:53.048403 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 25 02:44:53.048421 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 25 02:44:53.048434 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 25 02:44:53.048456 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 25 02:44:53.048471 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 25 02:44:53.048483 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 25 02:44:53.048496 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 25 02:44:53.048508 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 25 02:44:53.048521 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 25 02:44:53.048533 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 25 02:44:53.048545 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 25 02:44:53.048563 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 25 02:44:53.048576 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 25 02:44:53.048588 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 25 02:44:53.048613 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 25 02:44:53.048626 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 25 02:44:53.048638 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 25 02:44:53.048651 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 25 02:44:53.048663 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 25 02:44:53.048681 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 25 02:44:53.050720 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 25 02:44:53.050751 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 25 02:44:53.050765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 25 02:44:53.050778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 25 02:44:53.050791 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 25 02:44:53.050803 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 25 02:44:53.050816 kernel: Zone ranges:
Mar 25 02:44:53.050829 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 25 02:44:53.050841 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 25 02:44:53.050853 kernel: Normal empty
Mar 25 02:44:53.050871 kernel: Movable zone start for each node
Mar 25 02:44:53.050884 kernel: Early memory node ranges
Mar 25 02:44:53.050896 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 25 02:44:53.050908 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 25 02:44:53.050921 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 25 02:44:53.050934 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 25 02:44:53.050946 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 25 02:44:53.050966 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 25 02:44:53.050981 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 25 02:44:53.050999 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 25 02:44:53.051012 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 25 02:44:53.051024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 25 02:44:53.051037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 25 02:44:53.051049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 25 02:44:53.051062 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 25 02:44:53.051074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 25 02:44:53.051087 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 25 02:44:53.051099 kernel: TSC deadline timer available
Mar 25 02:44:53.051116 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 25 02:44:53.051129 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 25 02:44:53.051141 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 25 02:44:53.051153 kernel: Booting paravirtualized kernel on KVM
Mar 25 02:44:53.051166 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 25 02:44:53.051178 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 25 02:44:53.051191 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Mar 25 02:44:53.051203 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Mar 25 02:44:53.051216 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 25 02:44:53.051236 kernel: kvm-guest: PV spinlocks enabled
Mar 25 02:44:53.051249 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 25 02:44:53.051263 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 02:44:53.051277 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 25 02:44:53.051303 kernel: random: crng init done
Mar 25 02:44:53.051315 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 25 02:44:53.051327 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 25 02:44:53.051340 kernel: Fallback order for Node 0: 0
Mar 25 02:44:53.051362 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 25 02:44:53.051375 kernel: Policy zone: DMA32
Mar 25 02:44:53.051387 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 25 02:44:53.051412 kernel: software IO TLB: area num 16.
Mar 25 02:44:53.051425 kernel: Memory: 1897436K/2096616K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 198920K reserved, 0K cma-reserved)
Mar 25 02:44:53.051437 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 25 02:44:53.051450 kernel: Kernel/User page tables isolation: enabled
Mar 25 02:44:53.051463 kernel: ftrace: allocating 37985 entries in 149 pages
Mar 25 02:44:53.051475 kernel: ftrace: allocated 149 pages with 4 groups
Mar 25 02:44:53.051493 kernel: Dynamic Preempt: voluntary
Mar 25 02:44:53.051506 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 25 02:44:53.051519 kernel: rcu: RCU event tracing is enabled.
Mar 25 02:44:53.051532 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 25 02:44:53.051546 kernel: Trampoline variant of Tasks RCU enabled.
Mar 25 02:44:53.051573 kernel: Rude variant of Tasks RCU enabled.
Mar 25 02:44:53.051591 kernel: Tracing variant of Tasks RCU enabled.
Mar 25 02:44:53.051624 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 25 02:44:53.051638 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 25 02:44:53.051650 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 25 02:44:53.051663 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 25 02:44:53.051676 kernel: Console: colour VGA+ 80x25
Mar 25 02:44:53.051989 kernel: printk: console [tty0] enabled
Mar 25 02:44:53.052011 kernel: printk: console [ttyS0] enabled
Mar 25 02:44:53.052025 kernel: ACPI: Core revision 20230628
Mar 25 02:44:53.052038 kernel: APIC: Switch to symmetric I/O mode setup
Mar 25 02:44:53.052051 kernel: x2apic enabled
Mar 25 02:44:53.052071 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 25 02:44:53.052092 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 25 02:44:53.052107 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 25 02:44:53.052121 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 25 02:44:53.052134 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 25 02:44:53.052147 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 25 02:44:53.052160 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 25 02:44:53.052173 kernel: Spectre V2 : Mitigation: Retpolines
Mar 25 02:44:53.052566 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 25 02:44:53.052581 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 25 02:44:53.052614 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 25 02:44:53.052629 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 25 02:44:53.052642 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 25 02:44:53.052655 kernel: MDS: Mitigation: Clear CPU buffers
Mar 25 02:44:53.052667 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 25 02:44:53.052680 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 25 02:44:53.053720 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 25 02:44:53.053747 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 25 02:44:53.053761 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 25 02:44:53.053774 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 25 02:44:53.053795 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 25 02:44:53.053816 kernel: Freeing SMP alternatives memory: 32K
Mar 25 02:44:53.053831 kernel: pid_max: default: 32768 minimum: 301
Mar 25 02:44:53.053844 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 25 02:44:53.053857 kernel: landlock: Up and running.
Mar 25 02:44:53.053870 kernel: SELinux: Initializing.
Mar 25 02:44:53.053883 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 25 02:44:53.053896 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 25 02:44:53.053909 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 25 02:44:53.053922 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 25 02:44:53.053936 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 25 02:44:53.053955 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 25 02:44:53.053969 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 25 02:44:53.053982 kernel: signal: max sigframe size: 1776
Mar 25 02:44:53.053995 kernel: rcu: Hierarchical SRCU implementation.
Mar 25 02:44:53.054009 kernel: rcu: Max phase no-delay instances is 400.
Mar 25 02:44:53.054022 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 25 02:44:53.054035 kernel: smp: Bringing up secondary CPUs ...
Mar 25 02:44:53.054048 kernel: smpboot: x86: Booting SMP configuration:
Mar 25 02:44:53.054061 kernel: .... node #0, CPUs: #1
Mar 25 02:44:53.054079 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 25 02:44:53.054092 kernel: smp: Brought up 1 node, 2 CPUs
Mar 25 02:44:53.054106 kernel: smpboot: Max logical packages: 16
Mar 25 02:44:53.054119 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 25 02:44:53.054132 kernel: devtmpfs: initialized
Mar 25 02:44:53.054145 kernel: x86/mm: Memory block size: 128MB
Mar 25 02:44:53.054158 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 25 02:44:53.054172 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 25 02:44:53.054185 kernel: pinctrl core: initialized pinctrl subsystem
Mar 25 02:44:53.054203 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 25 02:44:53.054216 kernel: audit: initializing netlink subsys (disabled)
Mar 25 02:44:53.054229 kernel: audit: type=2000 audit(1742870691.277:1): state=initialized audit_enabled=0 res=1
Mar 25 02:44:53.054242 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 25 02:44:53.054256 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 25 02:44:53.054269 kernel: cpuidle: using governor menu
Mar 25 02:44:53.054282 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 25 02:44:53.054295 kernel: dca service started, version 1.12.1
Mar 25 02:44:53.054309 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 25 02:44:53.054327 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 25 02:44:53.054346 kernel: PCI: Using configuration type 1 for base access
Mar 25 02:44:53.054360 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 25 02:44:53.054373 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 25 02:44:53.054386 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 25 02:44:53.054400 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 25 02:44:53.054413 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 25 02:44:53.054426 kernel: ACPI: Added _OSI(Module Device)
Mar 25 02:44:53.054439 kernel: ACPI: Added _OSI(Processor Device)
Mar 25 02:44:53.054457 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 25 02:44:53.054471 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 25 02:44:53.054484 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 25 02:44:53.054497 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 25 02:44:53.054510 kernel: ACPI: Interpreter enabled
Mar 25 02:44:53.054523 kernel: ACPI: PM: (supports S0 S5)
Mar 25 02:44:53.054536 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 25 02:44:53.054550 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 25 02:44:53.054563 kernel: PCI: Using E820 reservations for host bridge windows
Mar 25 02:44:53.054581 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 25 02:44:53.054604 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 25 02:44:53.055940 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 25 02:44:53.056143 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 25 02:44:53.056358 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 25 02:44:53.056379 kernel: PCI host bridge to bus 0000:00
Mar 25 02:44:53.056617 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 25 02:44:53.058858 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 25 02:44:53.059047 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 25 02:44:53.059228 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 25 02:44:53.059406 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 25 02:44:53.059618 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 25 02:44:53.059814 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 25 02:44:53.060057 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 25 02:44:53.060291 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 25 02:44:53.060487 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 25 02:44:53.062724 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 25 02:44:53.062941 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 25 02:44:53.063140 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 25 02:44:53.063381 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 25 02:44:53.063589 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 25 02:44:53.063851 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 25 02:44:53.064043 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 25 02:44:53.064244 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 25 02:44:53.064455 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 25 02:44:53.064671 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 25 02:44:53.069049 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 25 02:44:53.069282 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 25 02:44:53.069479 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 25 02:44:53.069735 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 25 02:44:53.069928 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 25 02:44:53.070145 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 25 02:44:53.070343 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 25 02:44:53.070551 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 25 02:44:53.070779 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 25 02:44:53.070981 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 25 02:44:53.071171 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 25 02:44:53.071360 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 25 02:44:53.071548 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 25 02:44:53.071782 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 25 02:44:53.071995 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 25 02:44:53.072195 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 25 02:44:53.072393 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 25 02:44:53.072614 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 25 02:44:53.075555 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 25 02:44:53.075785 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 25 02:44:53.076017 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 25 02:44:53.076258 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 25 02:44:53.076445 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 25 02:44:53.076667 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 25 02:44:53.076874 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 25 02:44:53.077109 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 25 02:44:53.077307 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 25 02:44:53.077510 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 25 02:44:53.079626 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 25 02:44:53.079867 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 25 02:44:53.080102 kernel: pci_bus 0000:02: extended config space not accessible
Mar 25 02:44:53.080347 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 25 02:44:53.080567 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 25 02:44:53.080877 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 25 02:44:53.081077 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 25 02:44:53.081311 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 25 02:44:53.081506 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 25 02:44:53.083271 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 25 02:44:53.083508 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 25 02:44:53.083736 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 25 02:44:53.083983 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 25 02:44:53.084185 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 25 02:44:53.084379 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 25 02:44:53.084566 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 25 02:44:53.084793 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 25 02:44:53.084982 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 25 02:44:53.085172 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 25 02:44:53.085369 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 25 02:44:53.085560 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 25 02:44:53.088814 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 25 02:44:53.089020 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 25 02:44:53.089215 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 25 02:44:53.089407 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 25 02:44:53.089612 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 25 02:44:53.089835 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 25 02:44:53.090034 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 25 02:44:53.090221 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 25 02:44:53.090444 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 25 02:44:53.090646 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 25 02:44:53.091898 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 25 02:44:53.091924 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 25 02:44:53.091939 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 25 02:44:53.091952 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 25 02:44:53.091966 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 25 02:44:53.091988 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 25 02:44:53.092001 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 25 02:44:53.092015 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 25 02:44:53.092028 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 25 02:44:53.092041 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 25 02:44:53.092054 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 25 02:44:53.092067 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 25 02:44:53.092081 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 25 02:44:53.092094 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 25 02:44:53.092113 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 25 02:44:53.092126 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 25 02:44:53.092140 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 25 02:44:53.092162 kernel: iommu: Default domain type: Translated
Mar 25 02:44:53.092186 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 25 02:44:53.092200 kernel: PCI: Using ACPI for IRQ routing
Mar 25 02:44:53.092213 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 25 02:44:53.092227 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 25 02:44:53.092240 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 25 02:44:53.092466 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 25 02:44:53.092677 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 25 02:44:53.092889 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 25 02:44:53.092910 kernel: vgaarb: loaded
Mar 25 02:44:53.092924 kernel: clocksource: Switched to clocksource kvm-clock
Mar 25 02:44:53.092938 kernel: VFS: Disk quotas dquot_6.6.0
Mar 25 02:44:53.092952 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 25 02:44:53.092965 kernel: pnp: PnP ACPI init
Mar 25 02:44:53.093204 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 25 02:44:53.093236 kernel: pnp: PnP ACPI: found 5 devices
Mar 25 02:44:53.093250 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 25 02:44:53.093275 kernel: NET: Registered PF_INET protocol family
Mar 25 02:44:53.093287 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 25 02:44:53.093300 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 25 02:44:53.093313 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 25 02:44:53.093326 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 25 02:44:53.093346 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 25 02:44:53.093360 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 25 02:44:53.093373 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 25 02:44:53.093390 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 25 02:44:53.093403 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 25 02:44:53.093416 kernel: NET: Registered PF_XDP protocol family
Mar 25 02:44:53.093627 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 25 02:44:53.096883 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 25 02:44:53.097095 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 25 02:44:53.097290 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 25 02:44:53.097483 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 25 02:44:53.097692 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 25 02:44:53.097906 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 25 02:44:53.098095 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 25 02:44:53.098293 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 25 02:44:53.098481 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 25 02:44:53.098686 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 25 02:44:53.100936 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 25 02:44:53.101131 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 25 02:44:53.101323 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 25 02:44:53.101512 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 25 02:44:53.101744 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 25 02:44:53.101972 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 25 02:44:53.102175 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 25 02:44:53.102363 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 25 02:44:53.102559 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 25 02:44:53.104837 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 25 02:44:53.105040 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 25 02:44:53.105245 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 25 02:44:53.105436 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 25 02:44:53.105649 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 25 02:44:53.105866 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 25 02:44:53.106055 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 25 02:44:53.106244 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 25 02:44:53.106432 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 25 02:44:53.106645 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 25 02:44:53.109844 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 25 02:44:53.110048 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 25 02:44:53.110242 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 25 02:44:53.115287 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 25 02:44:53.115480 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 25 02:44:53.115685 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 25 02:44:53.115898 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 25 02:44:53.116089 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 25 02:44:53.116286 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 25 02:44:53.116486 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 25 02:44:53.116689 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 25 02:44:53.116897 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 25 02:44:53.117093 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 25 02:44:53.117297 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 25 02:44:53.117515 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 25 02:44:53.117761 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 25 02:44:53.117971 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 25 02:44:53.118180 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 25 02:44:53.118389 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 25 02:44:53.118610 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 25 02:44:53.119925 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 25 02:44:53.120107 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 25 02:44:53.120283 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 25 02:44:53.120467 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 25 02:44:53.120658 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 25 02:44:53.120854 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 25 02:44:53.121053 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 25 02:44:53.121235 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 25 02:44:53.121414 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 25 02:44:53.121619 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 25 02:44:53.121863 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 25 02:44:53.122070 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 25 02:44:53.122272 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 25 02:44:53.122465 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 25 02:44:53.122661 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 25 02:44:53.122859 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 25 02:44:53.123122 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 25 02:44:53.123307 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 25 02:44:53.123506 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 25 02:44:53.123771 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 25 02:44:53.123976 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 25 02:44:53.124176 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 25 02:44:53.124422 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 25 02:44:53.124644 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 25 02:44:53.124853 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 25 02:44:53.125042 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 25 02:44:53.125221 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 25 02:44:53.125404 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 25 02:44:53.125616 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 25 02:44:53.125847 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 25 02:44:53.126107 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 25 02:44:53.126132 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 25 02:44:53.126147 kernel: PCI: CLS 0 bytes, default 64
Mar 25 02:44:53.126161 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 25 02:44:53.126175 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Mar 25 02:44:53.126190 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 25 02:44:53.126212 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 25 02:44:53.126226 kernel: Initialise system trusted keyrings
Mar 25 02:44:53.126245 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 25 02:44:53.126259 kernel: Key type asymmetric registered
Mar 25 02:44:53.126272 kernel: Asymmetric key parser 'x509' registered
Mar 25 02:44:53.126286 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 25 02:44:53.126309 kernel: io scheduler mq-deadline registered
Mar 25 02:44:53.126323 kernel: io scheduler kyber registered
Mar 25 02:44:53.126337 kernel: io scheduler bfq registered
Mar 25 02:44:53.126562 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Mar 25 02:44:53.126824 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Mar 25 02:44:53.127025 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 25 02:44:53.127215 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Mar 25 02:44:53.127404 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Mar 25 02:44:53.127602 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 25 02:44:53.127813 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Mar 25 02:44:53.128024 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Mar 25 02:44:53.128221 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 25 02:44:53.128406 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Mar 25 02:44:53.128601 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Mar 25 02:44:53.128810 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 25 02:44:53.128997 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Mar 25 02:44:53.129206 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Mar 25 02:44:53.129403 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 25 02:44:53.129601 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Mar 25 02:44:53.129839 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Mar 25 02:44:53.130048 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 25 02:44:53.130256 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Mar 25 02:44:53.130466 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Mar 25 02:44:53.130706 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 25 02:44:53.130933 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Mar 25 02:44:53.131154 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Mar 25 02:44:53.131375 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 25 02:44:53.131399 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 25 02:44:53.131414 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 25 02:44:53.131437 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 25 02:44:53.131451 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 25 02:44:53.131465 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 25 02:44:53.131480 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 25 02:44:53.131502 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 25 02:44:53.131516 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 25 02:44:53.131781 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 25 02:44:53.131807 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 25 02:44:53.131981 kernel: rtc_cmos 00:03: registered as rtc0
Mar 25 02:44:53.132166 kernel: rtc_cmos 00:03: setting system clock to 2025-03-25T02:44:52 UTC (1742870692)
Mar 25 02:44:53.132343 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Mar 25 02:44:53.132364 kernel: intel_pstate: CPU model not supported
Mar 25 02:44:53.132379 kernel: NET: Registered PF_INET6 protocol family
Mar 25 02:44:53.132401 kernel: Segment Routing with IPv6
Mar 25 02:44:53.132415 kernel: In-situ OAM (IOAM) with IPv6
Mar 25 02:44:53.132429 kernel: NET: Registered PF_PACKET protocol family
Mar 25 02:44:53.132443 kernel: Key type dns_resolver registered
Mar 25 02:44:53.132464 kernel: IPI shorthand broadcast: enabled
Mar 25 02:44:53.132479 kernel: sched_clock: Marking stable (1422003969, 246621492)->(1933271310, -264645849)
Mar 25 02:44:53.132492 kernel: registered taskstats version 1
Mar 25 02:44:53.132506 kernel: Loading compiled-in X.509 certificates
Mar 25 02:44:53.132520 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386'
Mar 25 02:44:53.132533 kernel: Key type .fscrypt registered
Mar 25 02:44:53.132547 kernel: Key type fscrypt-provisioning registered
Mar 25 02:44:53.132565 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 25 02:44:53.132580 kernel: ima: Allocated hash algorithm: sha1
Mar 25 02:44:53.132618 kernel: ima: No architecture policies found
Mar 25 02:44:53.132633 kernel: clk: Disabling unused clocks
Mar 25 02:44:53.132646 kernel: Freeing unused kernel image (initmem) memory: 43592K
Mar 25 02:44:53.132660 kernel: Write protecting the kernel read-only data: 40960k
Mar 25 02:44:53.132674 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K
Mar 25 02:44:53.132688 kernel: Run /init as init process
Mar 25 02:44:53.132727 kernel: with arguments:
Mar 25 02:44:53.132743 kernel: /init
Mar 25 02:44:53.132756 kernel: with environment:
Mar 25 02:44:53.132777 kernel: HOME=/
Mar 25 02:44:53.132790 kernel: TERM=linux
Mar 25 02:44:53.132804 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 25 02:44:53.132819 systemd[1]: Successfully made /usr/ read-only.
Mar 25 02:44:53.132838 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 02:44:53.132853 systemd[1]: Detected virtualization kvm.
Mar 25 02:44:53.132867 systemd[1]: Detected architecture x86-64.
Mar 25 02:44:53.132882 systemd[1]: Running in initrd.
Mar 25 02:44:53.132902 systemd[1]: No hostname configured, using default hostname.
Mar 25 02:44:53.132917 systemd[1]: Hostname set to .
Mar 25 02:44:53.132937 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 02:44:53.132951 systemd[1]: Queued start job for default target initrd.target.
Mar 25 02:44:53.132966 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 02:44:53.132989 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 02:44:53.133004 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 25 02:44:53.133019 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 02:44:53.133039 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 25 02:44:53.133063 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 25 02:44:53.133079 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 25 02:44:53.133094 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 25 02:44:53.133109 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 02:44:53.133124 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 02:44:53.133138 systemd[1]: Reached target paths.target - Path Units.
Mar 25 02:44:53.133159 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 02:44:53.133173 systemd[1]: Reached target swap.target - Swaps.
Mar 25 02:44:53.133191 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 02:44:53.133206 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 02:44:53.133221 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 02:44:53.133236 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 25 02:44:53.133250 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 25 02:44:53.133265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 02:44:53.133285 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 02:44:53.133300 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 02:44:53.133315 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 02:44:53.133329 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 25 02:44:53.133344 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 02:44:53.133359 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 25 02:44:53.133373 systemd[1]: Starting systemd-fsck-usr.service...
Mar 25 02:44:53.133394 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 02:44:53.133409 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 02:44:53.133430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 02:44:53.133491 systemd-journald[202]: Collecting audit messages is disabled.
Mar 25 02:44:53.133532 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 02:44:53.133548 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 02:44:53.133570 systemd[1]: Finished systemd-fsck-usr.service.
Mar 25 02:44:53.133603 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 02:44:53.133621 systemd-journald[202]: Journal started
Mar 25 02:44:53.133654 systemd-journald[202]: Runtime Journal (/run/log/journal/2c7bc964f3aa4fc5a0d3f0b11ca11809) is 4.7M, max 37.9M, 33.2M free.
Mar 25 02:44:53.081729 systemd-modules-load[204]: Inserted module 'overlay'
Mar 25 02:44:53.159231 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 25 02:44:53.159261 kernel: Bridge firewalling registered
Mar 25 02:44:53.143765 systemd-modules-load[204]: Inserted module 'br_netfilter'
Mar 25 02:44:53.167715 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 02:44:53.168999 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 02:44:53.169986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 02:44:53.175482 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 02:44:53.184202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 02:44:53.196856 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 02:44:53.200764 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 02:44:53.206335 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 02:44:53.211749 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 02:44:53.220043 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 02:44:53.228890 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 25 02:44:53.232143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 02:44:53.234239 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 02:44:53.245866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 02:44:53.258106 dracut-cmdline[234]: dracut-dracut-053
Mar 25 02:44:53.266016 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 02:44:53.301735 systemd-resolved[238]: Positive Trust Anchors:
Mar 25 02:44:53.301756 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 02:44:53.301800 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 02:44:53.310427 systemd-resolved[238]: Defaulting to hostname 'linux'.
Mar 25 02:44:53.312619 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 02:44:53.313892 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 02:44:53.373741 kernel: SCSI subsystem initialized
Mar 25 02:44:53.384786 kernel: Loading iSCSI transport class v2.0-870.
Mar 25 02:44:53.398733 kernel: iscsi: registered transport (tcp)
Mar 25 02:44:53.424882 kernel: iscsi: registered transport (qla4xxx)
Mar 25 02:44:53.424970 kernel: QLogic iSCSI HBA Driver
Mar 25 02:44:53.483735 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 25 02:44:53.486780 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 25 02:44:53.533169 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 25 02:44:53.533251 kernel: device-mapper: uevent: version 1.0.3
Mar 25 02:44:53.534056 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 25 02:44:53.583742 kernel: raid6: sse2x4 gen() 13174 MB/s
Mar 25 02:44:53.600733 kernel: raid6: sse2x2 gen() 9363 MB/s
Mar 25 02:44:53.619432 kernel: raid6: sse2x1 gen() 9770 MB/s
Mar 25 02:44:53.619479 kernel: raid6: using algorithm sse2x4 gen() 13174 MB/s
Mar 25 02:44:53.638414 kernel: raid6: .... xor() 7538 MB/s, rmw enabled
Mar 25 02:44:53.638473 kernel: raid6: using ssse3x2 recovery algorithm
Mar 25 02:44:53.663725 kernel: xor: automatically using best checksumming function avx
Mar 25 02:44:53.839102 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 25 02:44:53.853129 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 02:44:53.856432 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 02:44:53.893942 systemd-udevd[421]: Using default interface naming scheme 'v255'.
Mar 25 02:44:53.903119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 02:44:53.909452 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 25 02:44:53.943365 dracut-pre-trigger[431]: rd.md=0: removing MD RAID activation
Mar 25 02:44:53.990590 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 02:44:53.994865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 02:44:54.138823 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 02:44:54.143926 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 25 02:44:54.179042 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 25 02:44:54.190597 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 02:44:54.191432 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 02:44:54.192167 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 02:44:54.201903 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 25 02:44:54.242408 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 02:44:54.299725 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Mar 25 02:44:54.375022 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Mar 25 02:44:54.375256 kernel: cryptd: max_cpu_qlen set to 1000
Mar 25 02:44:54.375292 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 25 02:44:54.375313 kernel: GPT:17805311 != 125829119
Mar 25 02:44:54.375331 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 25 02:44:54.375349 kernel: GPT:17805311 != 125829119
Mar 25 02:44:54.375366 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 25 02:44:54.375384 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 02:44:54.375402 kernel: AVX version of gcm_enc/dec engaged.
Mar 25 02:44:54.375421 kernel: AES CTR mode by8 optimization enabled
Mar 25 02:44:54.375438 kernel: ACPI: bus type USB registered
Mar 25 02:44:54.375475 kernel: usbcore: registered new interface driver usbfs
Mar 25 02:44:54.375497 kernel: usbcore: registered new interface driver hub
Mar 25 02:44:54.344208 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 02:44:54.344411 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 02:44:54.383737 kernel: usbcore: registered new device driver usb
Mar 25 02:44:54.345825 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 02:44:54.347423 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 02:44:54.358208 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 02:44:54.359526 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 02:44:54.367833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 02:44:54.369104 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 02:44:54.407193 kernel: libata version 3.00 loaded.
Mar 25 02:44:54.447747 kernel: ahci 0000:00:1f.2: version 3.0
Mar 25 02:44:54.570303 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 25 02:44:54.570335 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 25 02:44:54.570595 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 25 02:44:54.570853 kernel: scsi host0: ahci
Mar 25 02:44:54.571118 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (478)
Mar 25 02:44:54.571142 kernel: scsi host1: ahci
Mar 25 02:44:54.571447 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (481)
Mar 25 02:44:54.571479 kernel: scsi host2: ahci
Mar 25 02:44:54.571816 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Mar 25 02:44:54.572066 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Mar 25 02:44:54.572297 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 25 02:44:54.572529 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Mar 25 02:44:54.572792 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Mar 25 02:44:54.573289 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Mar 25 02:44:54.573539 kernel: hub 1-0:1.0: USB hub found
Mar 25 02:44:54.573908 kernel: hub 1-0:1.0: 4 ports detected
Mar 25 02:44:54.574134 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 25 02:44:54.574433 kernel: scsi host3: ahci
Mar 25 02:44:54.574805 kernel: hub 2-0:1.0: USB hub found
Mar 25 02:44:54.575073 kernel: scsi host4: ahci
Mar 25 02:44:54.575366 kernel: hub 2-0:1.0: 4 ports detected
Mar 25 02:44:54.575615 kernel: scsi host5: ahci
Mar 25 02:44:54.575930 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Mar 25 02:44:54.575954 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Mar 25 02:44:54.575974 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Mar 25 02:44:54.575992 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Mar 25 02:44:54.576010 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Mar 25 02:44:54.576029 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Mar 25 02:44:54.523103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 02:44:54.557260 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 25 02:44:54.591472 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 25 02:44:54.602489 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 25 02:44:54.603381 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 25 02:44:54.618416 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 25 02:44:54.620824 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 25 02:44:54.624860 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 02:44:54.646365 disk-uuid[566]: Primary Header is updated. Mar 25 02:44:54.646365 disk-uuid[566]: Secondary Entries is updated. Mar 25 02:44:54.646365 disk-uuid[566]: Secondary Header is updated. Mar 25 02:44:54.653220 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 25 02:44:54.662020 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 02:44:54.665979 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 25 02:44:54.793845 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 25 02:44:54.877119 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 25 02:44:54.877191 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 25 02:44:54.877213 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 25 02:44:54.885465 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 25 02:44:54.885507 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 25 02:44:54.885720 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 25 02:44:54.942729 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 25 02:44:54.949164 kernel: usbcore: registered new interface driver usbhid Mar 25 02:44:54.949244 kernel: usbhid: USB HID core driver Mar 25 02:44:54.956967 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 25 02:44:54.957009 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 25 02:44:55.665737 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 25 02:44:55.667368 disk-uuid[569]: The operation has completed successfully. Mar 25 02:44:55.735070 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 25 02:44:55.736275 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 25 02:44:55.779871 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 25 02:44:55.801723 sh[587]: Success Mar 25 02:44:55.819915 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 25 02:44:55.884877 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 25 02:44:55.889817 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 25 02:44:55.901482 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
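verity-setup has just turned the read-only /usr partition into /dev/mapper/usr, where every block read is checked against a sha256 hash tree whose root hash was supplied on the kernel command line; a tampered block fails the compare and the read errors out instead of returning bad data. The toy sketch below shows the principle only, with a flat two-level tree; real dm-verity uses a salted, multi-level on-disk format, so this is illustrative rather than the actual layout:

```python
import hashlib

BLOCK = 4096  # dm-verity's default data-block size

def verity_root(data: bytes) -> bytes:
    # Hash every block, then hash the concatenation of the block hashes.
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    leaves = b"".join(hashlib.sha256(b).digest() for b in blocks)
    return hashlib.sha256(leaves).digest()

def verified_read(data: bytes, expected_root: bytes) -> bytes:
    if verity_root(data) != expected_root:
        raise IOError("verity: root hash mismatch, refusing to return data")
    return data

image = b"read-only /usr contents" * 4096
root = verity_root(image)  # shipped out-of-band, e.g. as a hash= kernel argument
assert verified_read(image, root) == image
```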
Mar 25 02:44:55.916735 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2 Mar 25 02:44:55.916792 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 25 02:44:55.916815 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 25 02:44:55.920620 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 25 02:44:55.920668 kernel: BTRFS info (device dm-0): using free space tree Mar 25 02:44:55.931890 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 25 02:44:55.932839 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 25 02:44:55.934873 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 25 02:44:55.939446 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 25 02:44:55.969242 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 02:44:55.969292 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 25 02:44:55.969313 kernel: BTRFS info (device vda6): using free space tree Mar 25 02:44:55.974718 kernel: BTRFS info (device vda6): auto enabling async discard Mar 25 02:44:55.981749 kernel: BTRFS info (device vda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 02:44:55.985038 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 25 02:44:55.987900 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 25 02:44:56.092991 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 02:44:56.096888 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 02:44:56.179329 systemd-networkd[767]: lo: Link UP Mar 25 02:44:56.181867 systemd-networkd[767]: lo: Gained carrier Mar 25 02:44:56.207802 systemd-networkd[767]: Enumeration completed Mar 25 02:44:56.210922 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 02:44:56.216517 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 02:44:56.216555 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 02:44:56.316087 systemd[1]: Reached target network.target - Network. Mar 25 02:44:56.316550 systemd-networkd[767]: eth0: Link UP Mar 25 02:44:56.316561 systemd-networkd[767]: eth0: Gained carrier Mar 25 02:44:56.316580 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 02:44:56.345455 ignition[685]: Ignition 2.20.0 Mar 25 02:44:56.345482 ignition[685]: Stage: fetch-offline Mar 25 02:44:56.345572 ignition[685]: no configs at "/usr/lib/ignition/base.d" Mar 25 02:44:56.345593 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 02:44:56.345813 ignition[685]: parsed url from cmdline: "" Mar 25 02:44:56.349070 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 25 02:44:56.345821 ignition[685]: no config URL provided Mar 25 02:44:56.345841 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Mar 25 02:44:56.345860 ignition[685]: no config at "/usr/lib/ignition/user.ign" Mar 25 02:44:56.351882 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 25 02:44:56.345870 ignition[685]: failed to fetch config: resource requires networking Mar 25 02:44:56.346146 ignition[685]: Ignition finished successfully Mar 25 02:44:56.356082 systemd-networkd[767]: eth0: DHCPv4 address 10.230.58.198/30, gateway 10.230.58.197 acquired from 10.230.58.197 Mar 25 02:44:56.394204 ignition[775]: Ignition 2.20.0 Mar 25 02:44:56.395202 ignition[775]: Stage: fetch Mar 25 02:44:56.395455 ignition[775]: no configs at "/usr/lib/ignition/base.d" Mar 25 02:44:56.395480 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 02:44:56.395645 ignition[775]: parsed url from cmdline: "" Mar 25 02:44:56.395654 ignition[775]: no config URL provided Mar 25 02:44:56.395665 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Mar 25 02:44:56.395684 ignition[775]: no config at "/usr/lib/ignition/user.ign" Mar 25 02:44:56.395852 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 25 02:44:56.395922 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 25 02:44:56.395944 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Mar 25 02:44:56.417484 ignition[775]: GET result: OK Mar 25 02:44:56.418114 ignition[775]: parsing config with SHA512: ad06e965753b7e6a1ebae8641260264ef0492f7a362111ee54563363e830b0ee3c293a72d81e1fd5efd31dc508a24a5ca93990138aaf363be036aa6d5f58efa8 Mar 25 02:44:56.425477 unknown[775]: fetched base config from "system" Mar 25 02:44:56.427150 unknown[775]: fetched base config from "system" Mar 25 02:44:56.427928 unknown[775]: fetched user config from "openstack" Mar 25 02:44:56.428392 ignition[775]: fetch: fetch complete Mar 25 02:44:56.428409 ignition[775]: fetch: fetch passed Mar 25 02:44:56.432131 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 25 02:44:56.428481 ignition[775]: Ignition finished successfully Mar 25 02:44:56.440917 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 25 02:44:56.471276 ignition[783]: Ignition 2.20.0 Mar 25 02:44:56.471310 ignition[783]: Stage: kargs Mar 25 02:44:56.471558 ignition[783]: no configs at "/usr/lib/ignition/base.d" Mar 25 02:44:56.474776 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 25 02:44:56.471580 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 02:44:56.472636 ignition[783]: kargs: kargs passed Mar 25 02:44:56.472723 ignition[783]: Ignition finished successfully Mar 25 02:44:56.478750 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 25 02:44:56.504276 ignition[790]: Ignition 2.20.0 Mar 25 02:44:56.504295 ignition[790]: Stage: disks Mar 25 02:44:56.504531 ignition[790]: no configs at "/usr/lib/ignition/base.d" Mar 25 02:44:56.507065 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 25 02:44:56.504554 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 02:44:56.508366 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
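The fetch-offline stage above fails by design ("failed to fetch config: resource requires networking"); once DHCP completes, the fetch stage retrieves user data from the OpenStack metadata service and logs a SHA512 digest of the raw bytes before parsing. A reduced sketch of that step, assuming the standard metadata endpoint named in the log and simple bounded retries (the retry count is an assumption):

```python
import hashlib
import urllib.request

URL = "http://169.254.169.254/openstack/latest/user_data"

def fetch_user_data(max_attempts: int = 3) -> bytes:
    for attempt in range(1, max_attempts + 1):
        print(f"GET {URL}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.read()  # "GET result: OK"
        except OSError as err:
            print(f"attempt #{attempt} failed: {err}")
    raise RuntimeError("failed to fetch config")

raw = fetch_user_data()
# Ignition logs this digest so a config can be matched to a boot after the fact.
print("parsing config with SHA512:", hashlib.sha512(raw).hexdigest())
```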
Mar 25 02:44:56.505748 ignition[790]: disks: disks passed Mar 25 02:44:56.509450 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 25 02:44:56.505830 ignition[790]: Ignition finished successfully Mar 25 02:44:56.511027 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 02:44:56.512597 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 02:44:56.514182 systemd[1]: Reached target basic.target - Basic System. Mar 25 02:44:56.518851 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 25 02:44:56.553825 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 25 02:44:56.557636 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 25 02:44:56.559688 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 25 02:44:56.691725 kernel: EXT4-fs (vda9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none. Mar 25 02:44:56.693033 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 25 02:44:56.695048 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 25 02:44:56.697513 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 25 02:44:56.701790 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 25 02:44:56.702984 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 25 02:44:56.705451 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Mar 25 02:44:56.707761 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 25 02:44:56.707806 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 02:44:56.718959 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 25 02:44:56.721867 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 25 02:44:56.736829 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (806) Mar 25 02:44:56.736860 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 02:44:56.736879 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 25 02:44:56.736897 kernel: BTRFS info (device vda6): using free space tree Mar 25 02:44:56.746723 kernel: BTRFS info (device vda6): auto enabling async discard Mar 25 02:44:56.751300 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 25 02:44:56.828178 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Mar 25 02:44:56.837786 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Mar 25 02:44:56.845330 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Mar 25 02:44:56.853166 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Mar 25 02:44:56.961610 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 25 02:44:56.964303 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 25 02:44:56.966867 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 25 02:44:56.988855 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
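The fsck summary above is a pair of used/total counters: 14 of 1,628,000 inodes and 120,691 of 1,617,920 blocks are in use, i.e. the freshly provisioned root is only about 7.5% full before Ignition writes anything. The arithmetic, spelled out:

```python
files_used, files_total = 14, 1_628_000       # from "14/1628000 files"
blocks_used, blocks_total = 120_691, 1_617_920  # from "120691/1617920 blocks"

print(f"inodes in use: {files_used / files_total:.4%}")   # ~0.0009%
print(f"blocks in use: {blocks_used / blocks_total:.2%}") # ~7.46%
```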
Mar 25 02:44:56.990716 kernel: BTRFS info (device vda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 02:44:57.012451 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 25 02:44:57.033771 ignition[924]: INFO : Ignition 2.20.0 Mar 25 02:44:57.033771 ignition[924]: INFO : Stage: mount Mar 25 02:44:57.033771 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 02:44:57.033771 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 02:44:57.038306 ignition[924]: INFO : mount: mount passed Mar 25 02:44:57.038306 ignition[924]: INFO : Ignition finished successfully Mar 25 02:44:57.036925 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 25 02:44:58.177319 systemd-networkd[767]: eth0: Gained IPv6LL Mar 25 02:44:59.686285 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8eb1:24:19ff:fee6:3ac6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8eb1:24:19ff:fee6:3ac6/64 assigned by NDisc. Mar 25 02:44:59.686304 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 25 02:45:03.889029 coreos-metadata[808]: Mar 25 02:45:03.888 WARN failed to locate config-drive, using the metadata service API instead Mar 25 02:45:03.911297 coreos-metadata[808]: Mar 25 02:45:03.911 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 25 02:45:03.926799 coreos-metadata[808]: Mar 25 02:45:03.926 INFO Fetch successful Mar 25 02:45:03.927647 coreos-metadata[808]: Mar 25 02:45:03.927 INFO wrote hostname srv-nkv7s.gb1.brightbox.com to /sysroot/etc/hostname Mar 25 02:45:03.929753 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Mar 25 02:45:03.930078 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Mar 25 02:45:03.935927 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 25 02:45:03.964868 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 25 02:45:03.992740 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (940) Mar 25 02:45:03.999739 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 02:45:03.999787 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 25 02:45:03.999807 kernel: BTRFS info (device vda6): using free space tree Mar 25 02:45:04.005746 kernel: BTRFS info (device vda6): auto enabling async discard Mar 25 02:45:04.008791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
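coreos-metadata follows a config-drive-first policy: probe for a disk labelled config-2, and only when that is absent (the WARN above) fall back to the HTTP metadata API, then persist the fetched hostname for the real root. A simplified sketch under those assumptions (the config-drive branch is stubbed out, since mounting it is beside the point here; paths and URL are the ones named in the log):

```python
import pathlib
import urllib.request

CONFIG_DRIVES = ("/dev/disk/by-label/config-2", "/dev/disk/by-label/CONFIG-2")
HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

def fetch_hostname() -> str:
    for dev in CONFIG_DRIVES:
        if pathlib.Path(dev).exists():
            # Stub: a real agent would mount the drive and read its metadata.
            raise NotImplementedError("read hostname from the config drive")
    print("WARN failed to locate config-drive, using the metadata service API instead")
    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        return resp.read().decode().strip()

hostname = fetch_hostname()
pathlib.Path("/sysroot/etc/hostname").write_text(hostname + "\n")
print(f"INFO wrote hostname {hostname} to /sysroot/etc/hostname")
```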
Mar 25 02:45:04.042980 ignition[958]: INFO : Ignition 2.20.0 Mar 25 02:45:04.042980 ignition[958]: INFO : Stage: files Mar 25 02:45:04.044803 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 02:45:04.044803 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 02:45:04.044803 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Mar 25 02:45:04.047506 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 25 02:45:04.047506 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 25 02:45:04.049616 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 25 02:45:04.050719 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 25 02:45:04.050719 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 25 02:45:04.050479 unknown[958]: wrote ssh authorized keys file for user: core Mar 25 02:45:04.053582 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 25 02:45:04.053582 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 25 02:45:04.559969 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 25 02:45:04.885780 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 25 02:45:04.885780 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 25 02:45:04.885780 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 25 02:45:04.885780 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 25 02:45:04.885780 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 25 02:45:04.885780 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 02:45:04.885780 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 02:45:04.885780 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 02:45:04.885780 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 02:45:04.885780 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 02:45:04.903936 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 02:45:04.903936 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 25 02:45:04.903936 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 25 02:45:04.903936 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 25 02:45:04.903936 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Mar 25 02:45:05.423755 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 25 02:45:07.684825 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 25 02:45:07.684825 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 25 02:45:07.687949 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 02:45:07.689352 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 02:45:07.689352 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 25 02:45:07.689352 ignition[958]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 25 02:45:07.689352 ignition[958]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 25 02:45:07.696400 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 25 02:45:07.696400 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 25 02:45:07.696400 ignition[958]: INFO : files: files passed Mar 25 02:45:07.696400 ignition[958]: INFO : Ignition finished successfully Mar 25 02:45:07.692801 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 25 02:45:07.698906 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 25 02:45:07.703326 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 25 02:45:07.728851 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 25 02:45:07.729060 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 25 02:45:07.740235 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 02:45:07.740235 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 25 02:45:07.742972 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 02:45:07.745738 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 02:45:07.747488 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 25 02:45:07.749533 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 25 02:45:07.815504 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 25 02:45:07.815685 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 25 02:45:07.818945 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Mar 25 02:45:07.819655 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 25 02:45:07.820510 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 25 02:45:07.821918 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 25 02:45:07.846595 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 02:45:07.851626 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 25 02:45:07.875858 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 25 02:45:07.877672 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 02:45:07.878624 systemd[1]: Stopped target timers.target - Timer Units. Mar 25 02:45:07.880059 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 25 02:45:07.880269 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 02:45:07.881965 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 25 02:45:07.882890 systemd[1]: Stopped target basic.target - Basic System. Mar 25 02:45:07.884307 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 25 02:45:07.885690 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 02:45:07.887253 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 25 02:45:07.888851 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 25 02:45:07.890352 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 02:45:07.891926 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 25 02:45:07.893494 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 25 02:45:07.895169 systemd[1]: Stopped target swap.target - Swaps. Mar 25 02:45:07.896535 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 25 02:45:07.896777 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 25 02:45:07.898543 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 25 02:45:07.899558 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 02:45:07.900987 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 25 02:45:07.901231 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 02:45:07.902420 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 25 02:45:07.902613 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 25 02:45:07.904672 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 25 02:45:07.904888 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 02:45:07.906518 systemd[1]: ignition-files.service: Deactivated successfully. Mar 25 02:45:07.906687 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 25 02:45:07.910972 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 25 02:45:07.912057 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 25 02:45:07.912336 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 02:45:07.918388 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 25 02:45:07.919955 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Mar 25 02:45:07.920992 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 02:45:07.922891 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 25 02:45:07.923890 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 02:45:07.938099 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 25 02:45:07.939332 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 25 02:45:07.963645 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 25 02:45:07.970421 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 25 02:45:07.971162 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 25 02:45:07.978094 ignition[1011]: INFO : Ignition 2.20.0 Mar 25 02:45:07.978094 ignition[1011]: INFO : Stage: umount Mar 25 02:45:07.979880 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 02:45:07.979880 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 02:45:07.979880 ignition[1011]: INFO : umount: umount passed Mar 25 02:45:07.979880 ignition[1011]: INFO : Ignition finished successfully Mar 25 02:45:07.980913 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 25 02:45:07.981139 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 25 02:45:07.982991 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 25 02:45:07.983193 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 25 02:45:07.984648 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 25 02:45:07.984767 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 25 02:45:07.985987 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 25 02:45:07.986091 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 25 02:45:07.987329 systemd[1]: Stopped target network.target - Network. Mar 25 02:45:07.988665 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 25 02:45:07.988807 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 25 02:45:07.990214 systemd[1]: Stopped target paths.target - Path Units. Mar 25 02:45:07.991461 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 25 02:45:07.993772 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 02:45:08.000849 systemd[1]: Stopped target slices.target - Slice Units. Mar 25 02:45:08.002225 systemd[1]: Stopped target sockets.target - Socket Units. Mar 25 02:45:08.003739 systemd[1]: iscsid.socket: Deactivated successfully. Mar 25 02:45:08.003832 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 25 02:45:08.005317 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 25 02:45:08.005390 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 25 02:45:08.006589 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 25 02:45:08.006755 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 25 02:45:08.007999 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 25 02:45:08.008074 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 25 02:45:08.009282 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 25 02:45:08.009400 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Mar 25 02:45:08.011049 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 25 02:45:08.012968 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 25 02:45:08.017502 systemd-networkd[767]: eth0: DHCPv6 lease lost Mar 25 02:45:08.019547 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 25 02:45:08.019821 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 25 02:45:08.026759 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 25 02:45:08.027246 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 25 02:45:08.027473 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 25 02:45:08.030822 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 25 02:45:08.031868 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 25 02:45:08.032006 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 25 02:45:08.035923 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 25 02:45:08.036593 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 25 02:45:08.036673 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 02:45:08.037510 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 02:45:08.037583 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 02:45:08.039244 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 25 02:45:08.039329 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 25 02:45:08.041863 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 25 02:45:08.041952 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 02:45:08.043868 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 02:45:08.049681 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 02:45:08.049828 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 25 02:45:08.055257 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 25 02:45:08.055553 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 02:45:08.060707 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 25 02:45:08.060857 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 25 02:45:08.062875 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 25 02:45:08.062939 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 02:45:08.065155 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 25 02:45:08.065255 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 25 02:45:08.068526 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 25 02:45:08.068608 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 25 02:45:08.069965 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 02:45:08.070089 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 02:45:08.075881 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Mar 25 02:45:08.077904 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 25 02:45:08.077991 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 02:45:08.081076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 02:45:08.081185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 02:45:08.084898 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 25 02:45:08.085009 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 25 02:45:08.087671 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 25 02:45:08.088649 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 25 02:45:08.095468 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 25 02:45:08.095637 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 25 02:45:08.096984 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 25 02:45:08.100893 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 25 02:45:08.115729 systemd[1]: Switching root. Mar 25 02:45:08.150403 systemd-journald[202]: Journal stopped Mar 25 02:45:09.946037 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Mar 25 02:45:09.946228 kernel: SELinux: policy capability network_peer_controls=1 Mar 25 02:45:09.946269 kernel: SELinux: policy capability open_perms=1 Mar 25 02:45:09.946312 kernel: SELinux: policy capability extended_socket_class=1 Mar 25 02:45:09.946358 kernel: SELinux: policy capability always_check_network=0 Mar 25 02:45:09.946388 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 25 02:45:09.946410 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 25 02:45:09.946429 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 25 02:45:09.946449 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 25 02:45:09.946482 kernel: audit: type=1403 audit(1742870708.395:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 25 02:45:09.946505 systemd[1]: Successfully loaded SELinux policy in 60.439ms. Mar 25 02:45:09.946529 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.706ms. Mar 25 02:45:09.946567 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 02:45:09.946592 systemd[1]: Detected virtualization kvm. Mar 25 02:45:09.946622 systemd[1]: Detected architecture x86-64. Mar 25 02:45:09.946655 systemd[1]: Detected first boot. Mar 25 02:45:09.946686 systemd[1]: Hostname set to <srv-nkv7s.gb1.brightbox.com>. Mar 25 02:45:09.946762 systemd[1]: Initializing machine ID from VM UUID. Mar 25 02:45:09.946788 zram_generator::config[1056]: No configuration found. Mar 25 02:45:09.946827 kernel: Guest personality initialized and is inactive Mar 25 02:45:09.946866 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 25 02:45:09.946888 kernel: Initialized host personality Mar 25 02:45:09.946914 kernel: NET: Registered PF_VSOCK protocol family Mar 25 02:45:09.946936 systemd[1]: Populated /etc with preset unit settings.
Mar 25 02:45:09.946958 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 25 02:45:09.946980 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 25 02:45:09.947001 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 25 02:45:09.947033 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 25 02:45:09.947072 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 25 02:45:09.947115 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 25 02:45:09.947139 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 25 02:45:09.947160 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 25 02:45:09.947182 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 25 02:45:09.947204 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 25 02:45:09.947225 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 25 02:45:09.947246 systemd[1]: Created slice user.slice - User and Session Slice. Mar 25 02:45:09.947277 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 02:45:09.947315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 02:45:09.947339 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 25 02:45:09.947370 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 25 02:45:09.947394 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 25 02:45:09.947416 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 02:45:09.947437 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 25 02:45:09.947477 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 02:45:09.947502 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 25 02:45:09.947540 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 25 02:45:09.947623 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 25 02:45:09.947660 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 25 02:45:09.947684 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 02:45:09.947747 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 25 02:45:09.947773 systemd[1]: Reached target slices.target - Slice Units. Mar 25 02:45:09.947804 systemd[1]: Reached target swap.target - Swaps. Mar 25 02:45:09.947827 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 25 02:45:09.947849 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 25 02:45:09.947880 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 25 02:45:09.947903 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 02:45:09.947933 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 02:45:09.947967 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 25 02:45:09.947990 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 25 02:45:09.948025 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 25 02:45:09.948048 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 25 02:45:09.948094 systemd[1]: Mounting media.mount - External Media Directory... Mar 25 02:45:09.948119 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:45:09.948141 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 25 02:45:09.948162 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 25 02:45:09.948183 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 25 02:45:09.948231 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 25 02:45:09.948274 systemd[1]: Reached target machines.target - Containers. Mar 25 02:45:09.948298 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 25 02:45:09.948321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 02:45:09.948343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 02:45:09.948374 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 25 02:45:09.948397 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 02:45:09.948419 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 02:45:09.948440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 02:45:09.948476 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 25 02:45:09.948501 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 02:45:09.948537 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 25 02:45:09.948570 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 25 02:45:09.948594 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 25 02:45:09.948616 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 25 02:45:09.948650 systemd[1]: Stopped systemd-fsck-usr.service. Mar 25 02:45:09.948674 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 02:45:09.948735 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 02:45:09.948762 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 02:45:09.948784 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 25 02:45:09.948805 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 25 02:45:09.948826 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 25 02:45:09.948846 kernel: loop: module loaded Mar 25 02:45:09.948867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Mar 25 02:45:09.948898 systemd[1]: verity-setup.service: Deactivated successfully. Mar 25 02:45:09.948922 systemd[1]: Stopped verity-setup.service. Mar 25 02:45:09.948960 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:45:09.948984 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 25 02:45:09.949006 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 25 02:45:09.949043 systemd[1]: Mounted media.mount - External Media Directory. Mar 25 02:45:09.949078 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 25 02:45:09.949102 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 25 02:45:09.949123 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 25 02:45:09.949145 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 02:45:09.949182 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 25 02:45:09.949206 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 25 02:45:09.949242 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 02:45:09.949266 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 02:45:09.949287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 02:45:09.949318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 02:45:09.949350 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 25 02:45:09.949381 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 02:45:09.949413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 02:45:09.949436 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 02:45:09.949471 kernel: fuse: init (API version 7.39) Mar 25 02:45:09.949494 kernel: ACPI: bus type drm_connector registered Mar 25 02:45:09.949523 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 25 02:45:09.949547 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 25 02:45:09.949569 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 02:45:09.949591 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 02:45:09.949611 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 25 02:45:09.949632 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 25 02:45:09.949668 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 25 02:45:09.949750 systemd-journald[1153]: Collecting audit messages is disabled. Mar 25 02:45:09.949819 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 25 02:45:09.949845 systemd-journald[1153]: Journal started Mar 25 02:45:09.949879 systemd-journald[1153]: Runtime Journal (/run/log/journal/2c7bc964f3aa4fc5a0d3f0b11ca11809) is 4.7M, max 37.9M, 33.2M free. Mar 25 02:45:09.357242 systemd[1]: Queued start job for default target multi-user.target. Mar 25 02:45:09.372230 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 25 02:45:09.373100 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 25 02:45:09.957724 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Mar 25 02:45:09.962749 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 25 02:45:09.965779 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 02:45:09.970754 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 25 02:45:09.976742 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 25 02:45:09.984716 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 25 02:45:09.984780 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 02:45:10.006728 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 25 02:45:10.010722 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 02:45:10.014721 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 25 02:45:10.019719 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 02:45:10.023776 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 02:45:10.034387 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 25 02:45:10.044867 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 25 02:45:10.062421 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 02:45:10.071143 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 25 02:45:10.073201 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 25 02:45:10.074938 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 25 02:45:10.077148 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 25 02:45:10.078825 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 25 02:45:10.125348 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 25 02:45:10.131893 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 25 02:45:10.139742 kernel: loop0: detected capacity change from 0 to 151640 Mar 25 02:45:10.141014 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 25 02:45:10.191812 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 02:45:10.219720 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 25 02:45:10.224902 systemd-journald[1153]: Time spent on flushing to /var/log/journal/2c7bc964f3aa4fc5a0d3f0b11ca11809 is 98.140ms for 1160 entries. Mar 25 02:45:10.224902 systemd-journald[1153]: System Journal (/var/log/journal/2c7bc964f3aa4fc5a0d3f0b11ca11809) is 8M, max 584.8M, 576.8M free. Mar 25 02:45:10.381224 systemd-journald[1153]: Received client request to flush runtime journal. Mar 25 02:45:10.381293 kernel: loop1: detected capacity change from 0 to 109808 Mar 25 02:45:10.381324 kernel: loop2: detected capacity change from 0 to 205544 Mar 25 02:45:10.242946 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 25 02:45:10.306356 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Mar 25 02:45:10.311883 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 02:45:10.354899 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 02:45:10.373398 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 25 02:45:10.376826 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 25 02:45:10.384194 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 25 02:45:10.396598 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Mar 25 02:45:10.396629 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Mar 25 02:45:10.431008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 02:45:10.446740 kernel: loop3: detected capacity change from 0 to 8 Mar 25 02:45:10.455105 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 25 02:45:10.473736 kernel: loop4: detected capacity change from 0 to 151640 Mar 25 02:45:10.499736 kernel: loop5: detected capacity change from 0 to 109808 Mar 25 02:45:10.528786 kernel: loop6: detected capacity change from 0 to 205544 Mar 25 02:45:10.546729 kernel: loop7: detected capacity change from 0 to 8 Mar 25 02:45:10.548609 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Mar 25 02:45:10.549608 (sd-merge)[1221]: Merged extensions into '/usr'. Mar 25 02:45:10.558060 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)... Mar 25 02:45:10.558089 systemd[1]: Reloading... Mar 25 02:45:10.817336 zram_generator::config[1247]: No configuration found. Mar 25 02:45:11.095736 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 25 02:45:11.138493 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 02:45:11.237435 systemd[1]: Reloading finished in 678 ms. Mar 25 02:45:11.267009 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 25 02:45:11.277148 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 25 02:45:11.294947 systemd[1]: Starting ensure-sysext.service... Mar 25 02:45:11.300984 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 02:45:11.352749 systemd[1]: Reload requested from client PID 1305 ('systemctl') (unit ensure-sysext.service)... Mar 25 02:45:11.352782 systemd[1]: Reloading... Mar 25 02:45:11.406424 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 25 02:45:11.409304 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 25 02:45:11.412302 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 25 02:45:11.414770 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Mar 25 02:45:11.414925 systemd-tmpfiles[1306]: ACLs are not supported, ignoring. Mar 25 02:45:11.428292 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 25 02:45:11.428312 systemd-tmpfiles[1306]: Skipping /boot Mar 25 02:45:11.577324 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 02:45:11.577345 systemd-tmpfiles[1306]: Skipping /boot Mar 25 02:45:11.625978 zram_generator::config[1333]: No configuration found. Mar 25 02:45:11.851793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 02:45:11.953294 systemd[1]: Reloading finished in 599 ms. Mar 25 02:45:11.970251 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 25 02:45:11.989972 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 02:45:12.001609 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 02:45:12.007023 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 25 02:45:12.012576 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 25 02:45:12.020050 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 02:45:12.023254 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 02:45:12.029459 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 25 02:45:12.035977 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:45:12.036302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 02:45:12.043145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 02:45:12.061181 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 02:45:12.068105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 02:45:12.069104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 02:45:12.069277 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 02:45:12.069426 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:45:12.081450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:45:12.081854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 02:45:12.082197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 02:45:12.082413 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 02:45:12.088622 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Mar 25 02:45:12.089603 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:45:12.095177 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 25 02:45:12.118588 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:45:12.119024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 02:45:12.122008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 02:45:12.125299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 02:45:12.125485 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 02:45:12.125710 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 02:45:12.136442 systemd[1]: Finished ensure-sysext.service. Mar 25 02:45:12.149692 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 25 02:45:12.157413 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 25 02:45:12.162985 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 25 02:45:12.184811 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 25 02:45:12.186224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 02:45:12.186574 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 02:45:12.188786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 02:45:12.189466 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 02:45:12.191746 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 02:45:12.192459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 02:45:12.200455 systemd-udevd[1398]: Using default interface naming scheme 'v255'. Mar 25 02:45:12.205377 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 02:45:12.205480 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 02:45:12.205530 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 25 02:45:12.215475 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 02:45:12.216795 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 02:45:12.228355 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 25 02:45:12.240124 augenrules[1434]: No rules Mar 25 02:45:12.242755 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 02:45:12.243784 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
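augenrules reports "No rules" because nothing is installed under /etc/audit/rules.d, so audit-rules.service loads an empty kernel ruleset. A hedged example of what a rules fragment there could look like (hypothetical file name; standard auditctl syntax):

    # /etc/audit/rules.d/10-example.rules  (hypothetical)
    -D                                       # flush any existing rules
    -w /etc/passwd -p wa -k passwd_changes   # watch writes/attribute changes to /etc/passwd
    -a always,exit -F arch=b64 -S chmod -F auid>=1000 -F auid!=unset -k perm_mod

Running augenrules --load would merge such fragments into /etc/audit/audit.rules and hand them to the kernel.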
Mar 25 02:45:12.251795 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 25 02:45:12.269843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 02:45:12.279882 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 02:45:12.400213 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 25 02:45:12.402285 systemd[1]: Reached target time-set.target - System Time Set. Mar 25 02:45:12.504640 systemd-resolved[1396]: Positive Trust Anchors: Mar 25 02:45:12.504661 systemd-resolved[1396]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 02:45:12.504733 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 02:45:12.511030 systemd-networkd[1446]: lo: Link UP Mar 25 02:45:12.511677 systemd-networkd[1446]: lo: Gained carrier Mar 25 02:45:12.513678 systemd-networkd[1446]: Enumeration completed Mar 25 02:45:12.514033 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 02:45:12.522084 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 25 02:45:12.529478 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 25 02:45:12.530876 systemd-resolved[1396]: Using system hostname 'srv-nkv7s.gb1.brightbox.com'. Mar 25 02:45:12.538859 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 02:45:12.539675 systemd[1]: Reached target network.target - Network. Mar 25 02:45:12.540346 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 02:45:12.559970 systemd-networkd[1446]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 02:45:12.560155 systemd-networkd[1446]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 02:45:12.562582 systemd-networkd[1446]: eth0: Link UP Mar 25 02:45:12.562715 systemd-networkd[1446]: eth0: Gained carrier Mar 25 02:45:12.562838 systemd-networkd[1446]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 02:45:12.586814 systemd-networkd[1446]: eth0: DHCPv4 address 10.230.58.198/30, gateway 10.230.58.197 acquired from 10.230.58.197 Mar 25 02:45:12.590116 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Mar 25 02:45:12.593222 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 25 02:45:12.609756 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
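eth0 is configured here by the catch-all zz-default.network, and networkd warns that the match is "based on potentially unpredictable interface name". A sketch of a dedicated unit that pins the match to the NIC's hardware address instead (hypothetical file; the MAC below is a placeholder, not taken from this host):

    # /etc/systemd/network/10-eth0.network  (hypothetical)
    [Match]
    MACAddress=52:54:00:00:00:00

    [Network]
    DHCP=yes

With that in place the same DHCPv4 lease (10.230.58.198/30 via 10.230.58.197) would be acquired without the unpredictable-name warning.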
Mar 25 02:45:12.617748 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1458) Mar 25 02:45:12.726741 kernel: mousedev: PS/2 mouse device common for all mice Mar 25 02:45:12.738737 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 25 02:45:12.757198 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 25 02:45:12.760325 kernel: ACPI: button: Power Button [PWRF] Mar 25 02:45:12.761515 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 25 02:45:12.789600 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 25 02:45:12.806753 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 25 02:45:12.811894 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 25 02:45:12.816840 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 25 02:45:12.828716 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 25 02:45:12.870581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 02:45:13.074475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 02:45:13.101288 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 25 02:45:13.104855 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 25 02:45:13.134796 lvm[1486]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 02:45:13.175621 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 25 02:45:13.176922 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 02:45:13.177710 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 02:45:13.178611 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 25 02:45:13.179511 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 25 02:45:13.180861 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 25 02:45:13.181785 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 25 02:45:13.182631 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 25 02:45:13.183432 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 25 02:45:13.183484 systemd[1]: Reached target paths.target - Path Units. Mar 25 02:45:13.184152 systemd[1]: Reached target timers.target - Timer Units. Mar 25 02:45:13.187189 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 25 02:45:13.190316 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 25 02:45:13.195216 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 25 02:45:13.196257 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 25 02:45:13.197052 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 25 02:45:13.205573 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Mar 25 02:45:13.207230 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 25 02:45:13.210020 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 25 02:45:13.211504 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 25 02:45:13.212371 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 02:45:13.213105 systemd[1]: Reached target basic.target - Basic System. Mar 25 02:45:13.213857 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 25 02:45:13.213955 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 25 02:45:13.215573 systemd[1]: Starting containerd.service - containerd container runtime... Mar 25 02:45:13.224956 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 25 02:45:13.228462 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 25 02:45:13.234600 lvm[1490]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 02:45:13.234843 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 25 02:45:13.244062 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 25 02:45:13.245450 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 25 02:45:13.255105 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 25 02:45:13.259933 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 25 02:45:13.266898 jq[1494]: false Mar 25 02:45:13.268018 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 25 02:45:13.277069 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 25 02:45:13.295279 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 25 02:45:13.299405 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 25 02:45:13.300447 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 25 02:45:13.309088 systemd[1]: Starting update-engine.service - Update Engine... Mar 25 02:45:13.312843 dbus-daemon[1493]: [system] SELinux support is enabled Mar 25 02:45:13.316790 dbus-daemon[1493]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1446 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 25 02:45:13.321923 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 25 02:45:13.326316 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 25 02:45:13.334788 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 25 02:45:13.360578 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 25 02:45:13.361808 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 25 02:45:13.374255 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Mar 25 02:45:13.375846 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 25 02:45:13.402093 dbus-daemon[1493]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 25 02:45:13.418724 extend-filesystems[1495]: Found loop4 Mar 25 02:45:13.418724 extend-filesystems[1495]: Found loop5 Mar 25 02:45:13.418724 extend-filesystems[1495]: Found loop6 Mar 25 02:45:13.418724 extend-filesystems[1495]: Found loop7 Mar 25 02:45:13.439430 extend-filesystems[1495]: Found vda Mar 25 02:45:13.439430 extend-filesystems[1495]: Found vda1 Mar 25 02:45:13.439430 extend-filesystems[1495]: Found vda2 Mar 25 02:45:13.439430 extend-filesystems[1495]: Found vda3 Mar 25 02:45:13.439430 extend-filesystems[1495]: Found usr Mar 25 02:45:13.439430 extend-filesystems[1495]: Found vda4 Mar 25 02:45:13.439430 extend-filesystems[1495]: Found vda6 Mar 25 02:45:13.439430 extend-filesystems[1495]: Found vda7 Mar 25 02:45:13.439430 extend-filesystems[1495]: Found vda9 Mar 25 02:45:13.439430 extend-filesystems[1495]: Checking size of /dev/vda9 Mar 25 02:45:13.492543 jq[1505]: true Mar 25 02:45:13.425366 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 25 02:45:13.494249 tar[1508]: linux-amd64/helm Mar 25 02:45:13.425416 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 25 02:45:13.432906 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 25 02:45:13.433677 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 25 02:45:13.433718 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 25 02:45:13.447351 (ntainerd)[1520]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 25 02:45:13.458465 systemd-logind[1503]: Watching system buttons on /dev/input/event2 (Power Button) Mar 25 02:45:13.458504 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 25 02:45:13.458845 systemd-logind[1503]: New seat seat0. Mar 25 02:45:13.465896 systemd[1]: Started systemd-logind.service - User Login Management. Mar 25 02:45:13.476103 systemd[1]: motdgen.service: Deactivated successfully. Mar 25 02:45:13.476480 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 25 02:45:13.518055 update_engine[1504]: I20250325 02:45:13.515358 1504 main.cc:92] Flatcar Update Engine starting Mar 25 02:45:13.567752 update_engine[1504]: I20250325 02:45:13.529464 1504 update_check_scheduler.cc:74] Next update check in 3m1s Mar 25 02:45:13.567907 extend-filesystems[1495]: Resized partition /dev/vda9 Mar 25 02:45:13.577033 jq[1526]: true Mar 25 02:45:13.571523 systemd[1]: Started update-engine.service - Update Engine. Mar 25 02:45:13.577302 extend-filesystems[1533]: resize2fs 1.47.2 (1-Jan-2025) Mar 25 02:45:13.593733 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Mar 25 02:45:13.618444 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
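update_engine comes up with its first check scheduled in 3m1s, and locksmithd is started as the cluster reboot manager. On Flatcar the reboot behaviour is normally chosen in /etc/flatcar/update.conf; a sketch matching the reboot strategy locksmithd reports just below:

    # /etc/flatcar/update.conf  (sketch)
    REBOOT_STRATEGY=reboot

update_engine_client -status can be used interactively to inspect the same state machine these log lines come from.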
Mar 25 02:45:13.664775 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1456) Mar 25 02:45:13.720733 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 02:45:13.842869 locksmithd[1536]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 25 02:45:14.051122 systemd-networkd[1446]: eth0: Gained IPv6LL Mar 25 02:45:14.068319 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Mar 25 02:45:14.116324 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 02:45:14.136058 bash[1556]: Updated "/home/core/.ssh/authorized_keys" Mar 25 02:45:14.128862 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 02:45:14.150481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:45:14.161423 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 25 02:45:14.165512 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 25 02:45:14.179221 systemd[1]: Starting sshkeys.service... Mar 25 02:45:14.269815 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 25 02:45:14.278837 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 25 02:45:14.284491 dbus-daemon[1493]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 25 02:45:14.290632 dbus-daemon[1493]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1525 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 25 02:45:14.291261 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 25 02:45:14.305862 systemd[1]: Starting polkit.service - Authorization Manager... Mar 25 02:45:14.339742 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 25 02:45:14.427230 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 25 02:45:14.427230 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 25 02:45:14.427230 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 25 02:45:14.426472 polkitd[1570]: Started polkitd version 121 Mar 25 02:45:14.427667 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 25 02:45:14.450677 extend-filesystems[1495]: Resized filesystem in /dev/vda9 Mar 25 02:45:14.428260 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 25 02:45:14.602508 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 02:45:14.618857 polkitd[1570]: Loading rules from directory /etc/polkit-1/rules.d Mar 25 02:45:14.619014 polkitd[1570]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 25 02:45:14.624680 polkitd[1570]: Finished loading, compiling and executing 2 rules Mar 25 02:45:14.625897 systemd[1]: Started polkit.service - Authorization Manager. Mar 25 02:45:14.625597 dbus-daemon[1493]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 25 02:45:14.632453 polkitd[1570]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 25 02:45:14.635644 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
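The extend-filesystems run above amounts to an online ext4 grow: the root filesystem on /dev/vda9 goes from 1617920 to 15121403 4k blocks, i.e. from roughly 6 GiB to about 58 GiB, without unmounting. Done by hand it would be approximately:

    # sketch of the equivalent manual steps
    resize2fs /dev/vda9   # online-grows a mounted ext4 filesystem to fill its partition
    df -h /               # confirm the new size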
Mar 25 02:45:14.671135 systemd-hostnamed[1525]: Hostname set to (static) Mar 25 02:45:14.689310 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Mar 25 02:45:14.694187 systemd-networkd[1446]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8eb1:24:19ff:fee6:3ac6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8eb1:24:19ff:fee6:3ac6/64 assigned by NDisc. Mar 25 02:45:14.694200 systemd-networkd[1446]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 25 02:45:14.717405 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 02:45:14.726660 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 25 02:45:14.731121 systemd[1]: Started sshd@0-10.230.58.198:22-139.178.68.195:34914.service - OpenSSH per-connection server daemon (139.178.68.195:34914). Mar 25 02:45:14.744167 systemd[1]: Started sshd@1-10.230.58.198:22-81.192.87.130:23469.service - OpenSSH per-connection server daemon (81.192.87.130:23469). Mar 25 02:45:14.781214 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 02:45:14.781607 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 25 02:45:14.811001 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 25 02:45:14.829586 containerd[1520]: time="2025-03-25T02:45:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 25 02:45:14.847990 containerd[1520]: time="2025-03-25T02:45:14.847733821Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 25 02:45:14.894690 containerd[1520]: time="2025-03-25T02:45:14.893299002Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.25µs" Mar 25 02:45:14.894690 containerd[1520]: time="2025-03-25T02:45:14.893373214Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 25 02:45:14.894690 containerd[1520]: time="2025-03-25T02:45:14.893485137Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 25 02:45:14.896176 containerd[1520]: time="2025-03-25T02:45:14.896141710Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 25 02:45:14.896776 containerd[1520]: time="2025-03-25T02:45:14.896742742Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 25 02:45:14.897399 containerd[1520]: time="2025-03-25T02:45:14.897366569Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 02:45:14.897691 containerd[1520]: time="2025-03-25T02:45:14.897657730Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 02:45:14.898169 containerd[1520]: time="2025-03-25T02:45:14.898137738Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 02:45:14.900822 containerd[1520]: time="2025-03-25T02:45:14.899101496Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the 
btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 02:45:14.900822 containerd[1520]: time="2025-03-25T02:45:14.899135659Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 02:45:14.900822 containerd[1520]: time="2025-03-25T02:45:14.899184210Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 02:45:14.900822 containerd[1520]: time="2025-03-25T02:45:14.899208014Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 25 02:45:14.900822 containerd[1520]: time="2025-03-25T02:45:14.899429116Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 25 02:45:14.903838 containerd[1520]: time="2025-03-25T02:45:14.903805154Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 02:45:14.904003 containerd[1520]: time="2025-03-25T02:45:14.903972031Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 02:45:14.904102 containerd[1520]: time="2025-03-25T02:45:14.904076302Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 25 02:45:14.904486 containerd[1520]: time="2025-03-25T02:45:14.904454463Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 25 02:45:14.905398 containerd[1520]: time="2025-03-25T02:45:14.905325827Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 25 02:45:14.905607 containerd[1520]: time="2025-03-25T02:45:14.905573479Z" level=info msg="metadata content store policy set" policy=shared Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.007982874Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.009966125Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.010260026Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.010642367Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.013403783Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.013582011Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.013678458Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.013870410Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 25 02:45:15.020437 containerd[1520]: 
time="2025-03-25T02:45:15.013900036Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.014084724Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.014114922Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 25 02:45:15.020437 containerd[1520]: time="2025-03-25T02:45:15.014187283Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 25 02:45:15.026657 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.025882418Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.026052944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.026117311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.026156186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.026213318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.026237567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.026316452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.026379579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.026513881Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.026550116Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 25 02:45:15.028364 containerd[1520]: time="2025-03-25T02:45:15.027487313Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 25 02:45:15.035421 containerd[1520]: time="2025-03-25T02:45:15.032643372Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 25 02:45:15.035421 containerd[1520]: time="2025-03-25T02:45:15.032862370Z" level=info msg="Start snapshots syncer" Mar 25 02:45:15.035421 containerd[1520]: time="2025-03-25T02:45:15.033747662Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 25 02:45:15.035603 containerd[1520]: time="2025-03-25T02:45:15.034774003Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 25 02:45:15.035603 containerd[1520]: time="2025-03-25T02:45:15.034889193Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 02:45:15.036532 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 25 02:45:15.041832 containerd[1520]: time="2025-03-25T02:45:15.041418385Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 02:45:15.044743 containerd[1520]: time="2025-03-25T02:45:15.041684829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 25 02:45:15.044743 containerd[1520]: time="2025-03-25T02:45:15.044200144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 02:45:15.044743 containerd[1520]: time="2025-03-25T02:45:15.044232526Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 02:45:15.044743 containerd[1520]: time="2025-03-25T02:45:15.044254408Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 02:45:15.044743 containerd[1520]: time="2025-03-25T02:45:15.044286305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25 02:45:15.044743 containerd[1520]: time="2025-03-25T02:45:15.044317822Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 02:45:15.044743 containerd[1520]: time="2025-03-25T02:45:15.044345618Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 02:45:15.045314 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.046181832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.046243843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.046272728Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.046348135Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.046378649Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.046395427Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.046413483Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.046428987Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.046446329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.048791455Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.048864277Z" level=info msg="runtime interface created" Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.048881242Z" level=info msg="created NRI interface" Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.048916092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.048951129Z" level=info msg="Connect containerd service" Mar 25 02:45:15.049842 containerd[1520]: time="2025-03-25T02:45:15.049046527Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 02:45:15.047005 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 25 02:45:15.063293 containerd[1520]: time="2025-03-25T02:45:15.061110916Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 02:45:15.439112 sshd[1603]: Invalid user uno85 from 81.192.87.130 port 23469 Mar 25 02:45:15.471932 containerd[1520]: time="2025-03-25T02:45:15.469786128Z" level=info msg="Start subscribing containerd event" Mar 25 02:45:15.471932 containerd[1520]: time="2025-03-25T02:45:15.469920996Z" level=info msg="Start recovering state" Mar 25 02:45:15.471932 containerd[1520]: time="2025-03-25T02:45:15.470271856Z" level=info msg="Start event monitor" Mar 25 02:45:15.471932 containerd[1520]: time="2025-03-25T02:45:15.470316130Z" level=info msg="Start cni network conf syncer for default" Mar 25 02:45:15.471932 containerd[1520]: time="2025-03-25T02:45:15.470344648Z" level=info msg="Start streaming server" Mar 25 02:45:15.471932 containerd[1520]: time="2025-03-25T02:45:15.470390570Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 02:45:15.471932 containerd[1520]: time="2025-03-25T02:45:15.470416046Z" level=info msg="runtime interface starting up..." Mar 25 02:45:15.471932 containerd[1520]: time="2025-03-25T02:45:15.470458799Z" level=info msg="starting plugins..." Mar 25 02:45:15.471932 containerd[1520]: time="2025-03-25T02:45:15.470514399Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 02:45:15.478261 containerd[1520]: time="2025-03-25T02:45:15.473069143Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 02:45:15.478261 containerd[1520]: time="2025-03-25T02:45:15.478024426Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 02:45:15.479661 systemd[1]: Started containerd.service - containerd container runtime. Mar 25 02:45:15.503034 containerd[1520]: time="2025-03-25T02:45:15.496619758Z" level=info msg="containerd successfully booted in 0.668150s" Mar 25 02:45:15.509128 sshd[1603]: Received disconnect from 81.192.87.130 port 23469:11: Bye Bye [preauth] Mar 25 02:45:15.514725 sshd[1603]: Disconnected from invalid user uno85 81.192.87.130 port 23469 [preauth] Mar 25 02:45:15.518547 systemd[1]: sshd@1-10.230.58.198:22-81.192.87.130:23469.service: Deactivated successfully. Mar 25 02:45:15.861650 tar[1508]: linux-amd64/LICENSE Mar 25 02:45:15.862747 tar[1508]: linux-amd64/README.md Mar 25 02:45:15.889760 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 02:45:16.004723 sshd[1602]: Accepted publickey for core from 139.178.68.195 port 34914 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:45:16.012976 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:45:16.031806 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 02:45:16.036192 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 02:45:16.063533 systemd-logind[1503]: New session 1 of user core. Mar 25 02:45:16.095959 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 02:45:16.105149 systemd[1]: Starting user@500.service - User Manager for UID 500... 
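containerd's CRI plugin logs this error because /etc/cni/net.d is still empty, which is expected on a node whose pod network add-on has not been installed yet; the check passes once any conflist appears there. A minimal bridge-network sketch of such a file, say a hypothetical /etc/cni/net.d/10-examplenet.conflist with a made-up subnet:

    {
      "cniVersion": "1.0.0",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }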
Mar 25 02:45:16.118901 (systemd)[1634]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 02:45:16.125756 systemd-logind[1503]: New session c1 of user core. Mar 25 02:45:16.181330 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Mar 25 02:45:16.415516 systemd[1634]: Queued start job for default target default.target. Mar 25 02:45:16.423933 systemd[1634]: Created slice app.slice - User Application Slice. Mar 25 02:45:16.423976 systemd[1634]: Reached target paths.target - Paths. Mar 25 02:45:16.424182 systemd[1634]: Reached target timers.target - Timers. Mar 25 02:45:16.427888 systemd[1634]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 02:45:16.461708 systemd[1634]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 02:45:16.461949 systemd[1634]: Reached target sockets.target - Sockets. Mar 25 02:45:16.462022 systemd[1634]: Reached target basic.target - Basic System. Mar 25 02:45:16.462103 systemd[1634]: Reached target default.target - Main User Target. Mar 25 02:45:16.462174 systemd[1634]: Startup finished in 323ms. Mar 25 02:45:16.462388 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 02:45:16.481212 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 02:45:16.553978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:45:16.566255 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 02:45:17.133123 systemd[1]: Started sshd@2-10.230.58.198:22-139.178.68.195:53460.service - OpenSSH per-connection server daemon (139.178.68.195:53460). Mar 25 02:45:17.410524 kubelet[1648]: E0325 02:45:17.408477 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 02:45:17.413397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 02:45:17.414020 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 02:45:17.415166 systemd[1]: kubelet.service: Consumed 1.510s CPU time, 238.2M memory peak. Mar 25 02:45:18.047014 sshd[1657]: Accepted publickey for core from 139.178.68.195 port 53460 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:45:18.049276 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:45:18.058567 systemd-logind[1503]: New session 2 of user core. Mar 25 02:45:18.072038 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 25 02:45:18.666245 sshd[1661]: Connection closed by 139.178.68.195 port 53460 Mar 25 02:45:18.667750 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Mar 25 02:45:18.674594 systemd[1]: sshd@2-10.230.58.198:22-139.178.68.195:53460.service: Deactivated successfully. Mar 25 02:45:18.678262 systemd[1]: session-2.scope: Deactivated successfully. Mar 25 02:45:18.679654 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit. Mar 25 02:45:18.681554 systemd-logind[1503]: Removed session 2. Mar 25 02:45:18.823811 systemd[1]: Started sshd@3-10.230.58.198:22-139.178.68.195:53468.service - OpenSSH per-connection server daemon (139.178.68.195:53468). 
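The kubelet exit above, and the identical failures on each scheduled restart later in the log, all trace to the missing /var/lib/kubelet/config.yaml, a file kubeadm normally writes when the node is initialized or joined. A minimal hand-written sketch of such a KubeletConfiguration (not the kubeadm-generated original, which this host does not have yet):

    # /var/lib/kubelet/config.yaml  (sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock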
Mar 25 02:45:19.778015 sshd[1667]: Accepted publickey for core from 139.178.68.195 port 53468 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:45:19.780654 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:45:19.790806 systemd-logind[1503]: New session 3 of user core. Mar 25 02:45:19.800031 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 02:45:20.135088 login[1612]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 25 02:45:20.144157 systemd-logind[1503]: New session 4 of user core. Mar 25 02:45:20.151992 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 02:45:20.166060 login[1610]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 25 02:45:20.177458 systemd-logind[1503]: New session 5 of user core. Mar 25 02:45:20.186093 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 25 02:45:20.398841 sshd[1669]: Connection closed by 139.178.68.195 port 53468 Mar 25 02:45:20.397987 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Mar 25 02:45:20.402861 systemd[1]: sshd@3-10.230.58.198:22-139.178.68.195:53468.service: Deactivated successfully. Mar 25 02:45:20.405838 systemd[1]: session-3.scope: Deactivated successfully. Mar 25 02:45:20.408381 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit. Mar 25 02:45:20.409980 systemd-logind[1503]: Removed session 3. Mar 25 02:45:20.898210 coreos-metadata[1492]: Mar 25 02:45:20.898 WARN failed to locate config-drive, using the metadata service API instead Mar 25 02:45:20.944849 coreos-metadata[1492]: Mar 25 02:45:20.944 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 25 02:45:20.951007 coreos-metadata[1492]: Mar 25 02:45:20.950 INFO Fetch failed with 404: resource not found Mar 25 02:45:20.951007 coreos-metadata[1492]: Mar 25 02:45:20.950 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 25 02:45:20.951817 coreos-metadata[1492]: Mar 25 02:45:20.951 INFO Fetch successful Mar 25 02:45:20.951957 coreos-metadata[1492]: Mar 25 02:45:20.951 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 25 02:45:20.964052 coreos-metadata[1492]: Mar 25 02:45:20.964 INFO Fetch successful Mar 25 02:45:20.964242 coreos-metadata[1492]: Mar 25 02:45:20.964 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 25 02:45:20.976649 coreos-metadata[1492]: Mar 25 02:45:20.976 INFO Fetch successful Mar 25 02:45:20.976859 coreos-metadata[1492]: Mar 25 02:45:20.976 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 25 02:45:20.992652 coreos-metadata[1492]: Mar 25 02:45:20.992 INFO Fetch successful Mar 25 02:45:20.992838 coreos-metadata[1492]: Mar 25 02:45:20.992 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 25 02:45:21.009162 coreos-metadata[1492]: Mar 25 02:45:21.009 INFO Fetch successful Mar 25 02:45:21.049294 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 25 02:45:21.050937 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
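coreos-metadata fails to locate an OpenStack config-drive below, falls back to the metadata service, 404s on the OpenStack-specific JSON, and then walks the EC2-compatible /latest/meta-data/ endpoints one by one. The same data can be fetched by hand from inside the instance:

    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-ipv4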
Mar 25 02:45:21.757921 coreos-metadata[1567]: Mar 25 02:45:21.757 WARN failed to locate config-drive, using the metadata service API instead Mar 25 02:45:21.781224 coreos-metadata[1567]: Mar 25 02:45:21.781 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 25 02:45:21.801560 coreos-metadata[1567]: Mar 25 02:45:21.801 INFO Fetch successful Mar 25 02:45:21.801770 coreos-metadata[1567]: Mar 25 02:45:21.801 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 25 02:45:21.831858 coreos-metadata[1567]: Mar 25 02:45:21.831 INFO Fetch successful Mar 25 02:45:21.835242 unknown[1567]: wrote ssh authorized keys file for user: core Mar 25 02:45:21.879850 update-ssh-keys[1707]: Updated "/home/core/.ssh/authorized_keys" Mar 25 02:45:21.880954 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 25 02:45:21.884128 systemd[1]: Finished sshkeys.service. Mar 25 02:45:21.888411 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 02:45:21.889943 systemd[1]: Startup finished in 1.615s (kernel) + 15.608s (initrd) + 13.553s (userspace) = 30.777s. Mar 25 02:45:27.604984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 02:45:27.607899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:45:27.945781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:45:27.959255 (kubelet)[1719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 02:45:28.052024 kubelet[1719]: E0325 02:45:28.051866 1719 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 02:45:28.057488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 02:45:28.058027 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 02:45:28.058957 systemd[1]: kubelet.service: Consumed 392ms CPU time, 97.7M memory peak. Mar 25 02:45:30.556570 systemd[1]: Started sshd@4-10.230.58.198:22-139.178.68.195:34546.service - OpenSSH per-connection server daemon (139.178.68.195:34546). Mar 25 02:45:31.465932 sshd[1727]: Accepted publickey for core from 139.178.68.195 port 34546 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:45:31.468187 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:45:31.476124 systemd-logind[1503]: New session 6 of user core. Mar 25 02:45:31.485000 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 25 02:45:32.085661 sshd[1729]: Connection closed by 139.178.68.195 port 34546 Mar 25 02:45:32.087095 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Mar 25 02:45:32.091817 systemd[1]: sshd@4-10.230.58.198:22-139.178.68.195:34546.service: Deactivated successfully. Mar 25 02:45:32.094614 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 02:45:32.096859 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit. Mar 25 02:45:32.098546 systemd-logind[1503]: Removed session 6. 
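The roughly 10-second cadence of the kubelet restarts (failure at 02:45:17, restart counter 1 at 02:45:27, and so on below) is systemd's doing, not kubelet's. A unit stanza along these lines would produce exactly this pattern (a sketch of the typical kubeadm-style drop-in, not a quote of the unit shipped on this host):

    [Service]
    Restart=always
    RestartSec=10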
Mar 25 02:45:32.241954 systemd[1]: Started sshd@5-10.230.58.198:22-139.178.68.195:34548.service - OpenSSH per-connection server daemon (139.178.68.195:34548). Mar 25 02:45:33.155410 sshd[1735]: Accepted publickey for core from 139.178.68.195 port 34548 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:45:33.157570 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:45:33.165956 systemd-logind[1503]: New session 7 of user core. Mar 25 02:45:33.175988 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 02:45:33.770620 sshd[1737]: Connection closed by 139.178.68.195 port 34548 Mar 25 02:45:33.771611 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Mar 25 02:45:33.776682 systemd[1]: sshd@5-10.230.58.198:22-139.178.68.195:34548.service: Deactivated successfully. Mar 25 02:45:33.779470 systemd[1]: session-7.scope: Deactivated successfully. Mar 25 02:45:33.780725 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit. Mar 25 02:45:33.782473 systemd-logind[1503]: Removed session 7. Mar 25 02:45:33.929348 systemd[1]: Started sshd@6-10.230.58.198:22-139.178.68.195:34550.service - OpenSSH per-connection server daemon (139.178.68.195:34550). Mar 25 02:45:34.847080 sshd[1743]: Accepted publickey for core from 139.178.68.195 port 34550 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:45:34.851115 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:45:34.859951 systemd-logind[1503]: New session 8 of user core. Mar 25 02:45:34.868991 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 25 02:45:35.469338 sshd[1745]: Connection closed by 139.178.68.195 port 34550 Mar 25 02:45:35.470564 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Mar 25 02:45:35.475236 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit. Mar 25 02:45:35.475934 systemd[1]: sshd@6-10.230.58.198:22-139.178.68.195:34550.service: Deactivated successfully. Mar 25 02:45:35.479506 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 02:45:35.482079 systemd-logind[1503]: Removed session 8. Mar 25 02:45:35.626042 systemd[1]: Started sshd@7-10.230.58.198:22-139.178.68.195:34456.service - OpenSSH per-connection server daemon (139.178.68.195:34456). Mar 25 02:45:36.525979 sshd[1751]: Accepted publickey for core from 139.178.68.195 port 34456 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:45:36.528213 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:45:36.536398 systemd-logind[1503]: New session 9 of user core. Mar 25 02:45:36.542929 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 25 02:45:37.014626 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 02:45:37.015126 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 02:45:37.031385 sudo[1754]: pam_unix(sudo:session): session closed for user root Mar 25 02:45:37.174424 sshd[1753]: Connection closed by 139.178.68.195 port 34456 Mar 25 02:45:37.175822 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Mar 25 02:45:37.181867 systemd[1]: sshd@7-10.230.58.198:22-139.178.68.195:34456.service: Deactivated successfully. Mar 25 02:45:37.184417 systemd[1]: session-9.scope: Deactivated successfully. 
Mar 25 02:45:37.185498 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit. Mar 25 02:45:37.187253 systemd-logind[1503]: Removed session 9. Mar 25 02:45:37.331828 systemd[1]: Started sshd@8-10.230.58.198:22-139.178.68.195:34462.service - OpenSSH per-connection server daemon (139.178.68.195:34462). Mar 25 02:45:38.084138 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 25 02:45:38.087531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:45:38.238631 sshd[1760]: Accepted publickey for core from 139.178.68.195 port 34462 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:45:38.240095 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:45:38.251913 systemd-logind[1503]: New session 10 of user core. Mar 25 02:45:38.266020 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 02:45:38.269737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:45:38.286239 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 02:45:38.360951 kubelet[1769]: E0325 02:45:38.360775 1769 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 02:45:38.364949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 02:45:38.365416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 02:45:38.366438 systemd[1]: kubelet.service: Consumed 221ms CPU time, 93.9M memory peak. Mar 25 02:45:38.715441 sudo[1779]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 02:45:38.716112 sudo[1779]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 02:45:38.722318 sudo[1779]: pam_unix(sudo:session): session closed for user root Mar 25 02:45:38.730893 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 02:45:38.731355 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 02:45:38.745787 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 02:45:38.800972 augenrules[1801]: No rules Mar 25 02:45:38.801944 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 02:45:38.802380 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 02:45:38.803671 sudo[1778]: pam_unix(sudo:session): session closed for user root Mar 25 02:45:38.947199 sshd[1771]: Connection closed by 139.178.68.195 port 34462 Mar 25 02:45:38.948386 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Mar 25 02:45:38.953072 systemd[1]: sshd@8-10.230.58.198:22-139.178.68.195:34462.service: Deactivated successfully. Mar 25 02:45:38.956176 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 02:45:38.958591 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit. Mar 25 02:45:38.959976 systemd-logind[1503]: Removed session 10. 
Mar 25 02:45:39.104782 systemd[1]: Started sshd@9-10.230.58.198:22-139.178.68.195:34476.service - OpenSSH per-connection server daemon (139.178.68.195:34476).
Mar 25 02:45:40.007962 sshd[1810]: Accepted publickey for core from 139.178.68.195 port 34476 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:45:40.010217 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:45:40.018092 systemd-logind[1503]: New session 11 of user core.
Mar 25 02:45:40.026943 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 25 02:45:40.485910 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 25 02:45:40.486406 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 02:45:41.422658 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 25 02:45:41.437362 (dockerd)[1831]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 25 02:45:42.136173 dockerd[1831]: time="2025-03-25T02:45:42.136023394Z" level=info msg="Starting up"
Mar 25 02:45:42.138999 dockerd[1831]: time="2025-03-25T02:45:42.138564214Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 25 02:45:42.245417 dockerd[1831]: time="2025-03-25T02:45:42.245092917Z" level=info msg="Loading containers: start."
Mar 25 02:45:42.480954 kernel: Initializing XFRM netlink socket
Mar 25 02:45:42.489859 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection.
Mar 25 02:45:42.603582 systemd-networkd[1446]: docker0: Link UP
Mar 25 02:45:42.673613 dockerd[1831]: time="2025-03-25T02:45:42.673547305Z" level=info msg="Loading containers: done."
Mar 25 02:45:42.699841 dockerd[1831]: time="2025-03-25T02:45:42.699765098Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 25 02:45:42.700083 dockerd[1831]: time="2025-03-25T02:45:42.699891242Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
Mar 25 02:45:42.700083 dockerd[1831]: time="2025-03-25T02:45:42.700072867Z" level=info msg="Daemon has completed initialization"
Mar 25 02:45:42.741937 dockerd[1831]: time="2025-03-25T02:45:42.740598415Z" level=info msg="API listen on /run/docker.sock"
Mar 25 02:45:42.741087 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 25 02:45:43.405974 systemd-resolved[1396]: Clock change detected. Flushing caches.
Mar 25 02:45:43.406369 systemd-timesyncd[1414]: Contacted time server [2a05:d01c:9d2:d200::be00:5]:123 (2.flatcar.pool.ntp.org).
Mar 25 02:45:43.406470 systemd-timesyncd[1414]: Initial clock synchronization to Tue 2025-03-25 02:45:43.405841 UTC.
Mar 25 02:45:44.553935 containerd[1520]: time="2025-03-25T02:45:44.552683243Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 25 02:45:45.254237 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 25 02:45:45.772044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559342341.mount: Deactivated successfully.
Mar 25 02:45:49.021450 containerd[1520]: time="2025-03-25T02:45:49.021326730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:49.023317 containerd[1520]: time="2025-03-25T02:45:49.023233028Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959276"
Mar 25 02:45:49.024245 containerd[1520]: time="2025-03-25T02:45:49.024169250Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:49.028108 containerd[1520]: time="2025-03-25T02:45:49.028004124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:49.030586 containerd[1520]: time="2025-03-25T02:45:49.029661761Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 4.476776541s"
Mar 25 02:45:49.030586 containerd[1520]: time="2025-03-25T02:45:49.029731212Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\""
Mar 25 02:45:49.033377 containerd[1520]: time="2025-03-25T02:45:49.033118500Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 25 02:45:49.132336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 25 02:45:49.138045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:45:49.361404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:45:49.373684 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 02:45:49.468074 kubelet[2097]: E0325 02:45:49.467804 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 02:45:49.471059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 02:45:49.471319 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 02:45:49.472157 systemd[1]: kubelet.service: Consumed 299ms CPU time, 97.7M memory peak.
Mar 25 02:45:52.036910 containerd[1520]: time="2025-03-25T02:45:52.035334582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:52.037691 containerd[1520]: time="2025-03-25T02:45:52.036865485Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713784"
Mar 25 02:45:52.038114 containerd[1520]: time="2025-03-25T02:45:52.038078941Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:52.041997 containerd[1520]: time="2025-03-25T02:45:52.041954939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:52.043592 containerd[1520]: time="2025-03-25T02:45:52.043554018Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 3.010387905s"
Mar 25 02:45:52.043844 containerd[1520]: time="2025-03-25T02:45:52.043811571Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\""
Mar 25 02:45:52.045612 containerd[1520]: time="2025-03-25T02:45:52.045325879Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 25 02:45:54.621558 containerd[1520]: time="2025-03-25T02:45:54.619954505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:54.621558 containerd[1520]: time="2025-03-25T02:45:54.621158650Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780376"
Mar 25 02:45:54.622588 containerd[1520]: time="2025-03-25T02:45:54.622548932Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:54.626371 containerd[1520]: time="2025-03-25T02:45:54.625764426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:54.627483 containerd[1520]: time="2025-03-25T02:45:54.627433189Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 2.581641239s"
Mar 25 02:45:54.627568 containerd[1520]: time="2025-03-25T02:45:54.627489439Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\""
Mar 25 02:45:54.628553 containerd[1520]: time="2025-03-25T02:45:54.628526142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 25 02:45:56.676505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917143977.mount: Deactivated successfully.
Mar 25 02:45:57.647944 containerd[1520]: time="2025-03-25T02:45:57.647703336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:57.649623 containerd[1520]: time="2025-03-25T02:45:57.649549585Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354638"
Mar 25 02:45:57.650869 containerd[1520]: time="2025-03-25T02:45:57.650806143Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:57.654110 containerd[1520]: time="2025-03-25T02:45:57.654060377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:45:57.655480 containerd[1520]: time="2025-03-25T02:45:57.655141324Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 3.026460725s"
Mar 25 02:45:57.655480 containerd[1520]: time="2025-03-25T02:45:57.655189424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\""
Mar 25 02:45:57.657552 containerd[1520]: time="2025-03-25T02:45:57.657520793Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 25 02:45:58.340401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073892795.mount: Deactivated successfully.
Mar 25 02:45:59.632537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 25 02:45:59.639085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:45:59.646789 update_engine[1504]: I20250325 02:45:59.645141 1504 update_attempter.cc:509] Updating boot flags...
Mar 25 02:45:59.864962 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2174)
Mar 25 02:46:00.147917 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2177)
Mar 25 02:46:00.555098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:46:00.570350 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 02:46:00.731290 kubelet[2187]: E0325 02:46:00.731181 2187 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 02:46:00.733974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 02:46:00.734689 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 02:46:00.735533 systemd[1]: kubelet.service: Consumed 612ms CPU time, 97.2M memory peak.
Mar 25 02:46:01.029199 containerd[1520]: time="2025-03-25T02:46:01.029110723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:46:01.032728 containerd[1520]: time="2025-03-25T02:46:01.031662187Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769"
Mar 25 02:46:01.039210 containerd[1520]: time="2025-03-25T02:46:01.038098497Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:46:01.096280 containerd[1520]: time="2025-03-25T02:46:01.096182932Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.438315618s"
Mar 25 02:46:01.096280 containerd[1520]: time="2025-03-25T02:46:01.096286720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 25 02:46:01.096586 containerd[1520]: time="2025-03-25T02:46:01.096494393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:46:01.100579 containerd[1520]: time="2025-03-25T02:46:01.100540164Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 25 02:46:01.841535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1900224269.mount: Deactivated successfully.
Mar 25 02:46:01.849572 containerd[1520]: time="2025-03-25T02:46:01.849510329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 02:46:01.850960 containerd[1520]: time="2025-03-25T02:46:01.850903749Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Mar 25 02:46:01.852133 containerd[1520]: time="2025-03-25T02:46:01.852100267Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 02:46:01.854998 containerd[1520]: time="2025-03-25T02:46:01.854956891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 02:46:01.856308 containerd[1520]: time="2025-03-25T02:46:01.856264579Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 755.66437ms"
Mar 25 02:46:01.856389 containerd[1520]: time="2025-03-25T02:46:01.856312757Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 25 02:46:01.857218 containerd[1520]: time="2025-03-25T02:46:01.857183942Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 25 02:46:02.547083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561782317.mount: Deactivated successfully.
Mar 25 02:46:06.180813 containerd[1520]: time="2025-03-25T02:46:06.180693095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:46:06.192265 containerd[1520]: time="2025-03-25T02:46:06.192196887Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981"
Mar 25 02:46:06.204569 containerd[1520]: time="2025-03-25T02:46:06.204455656Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:46:06.219549 containerd[1520]: time="2025-03-25T02:46:06.219468587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:46:06.221433 containerd[1520]: time="2025-03-25T02:46:06.221179498Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.36395271s"
Mar 25 02:46:06.221433 containerd[1520]: time="2025-03-25T02:46:06.221228369Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Mar 25 02:46:10.882100 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 25 02:46:10.887084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:46:11.096069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:46:11.107414 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 02:46:11.183515 kubelet[2278]: E0325 02:46:11.183076 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 02:46:11.186116 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:46:11.188143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 02:46:11.188500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 02:46:11.189111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:46:11.189485 systemd[1]: kubelet.service: Consumed 237ms CPU time, 93.6M memory peak.
Mar 25 02:46:11.193387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:46:11.234026 systemd[1]: Reload requested from client PID 2293 ('systemctl') (unit session-11.scope)...
Mar 25 02:46:11.234345 systemd[1]: Reloading...
Mar 25 02:46:11.425965 zram_generator::config[2335]: No configuration found.
Mar 25 02:46:11.616400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 02:46:11.774618 systemd[1]: Reloading finished in 539 ms.
Mar 25 02:46:11.853379 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:46:11.859169 systemd[1]: kubelet.service: Deactivated successfully.
Mar 25 02:46:11.859605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:46:11.859693 systemd[1]: kubelet.service: Consumed 134ms CPU time, 83.6M memory peak.
Mar 25 02:46:11.862746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:46:12.028945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:46:12.044421 (kubelet)[2408]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 25 02:46:12.121296 kubelet[2408]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 02:46:12.121296 kubelet[2408]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 25 02:46:12.121296 kubelet[2408]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 02:46:12.121296 kubelet[2408]: I0325 02:46:12.121164 2408 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 25 02:46:12.464913 kubelet[2408]: I0325 02:46:12.463398 2408 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 25 02:46:12.464913 kubelet[2408]: I0325 02:46:12.463469 2408 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 25 02:46:12.464913 kubelet[2408]: I0325 02:46:12.463976 2408 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 25 02:46:12.492220 kubelet[2408]: I0325 02:46:12.492178 2408 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 02:46:12.496192 kubelet[2408]: E0325 02:46:12.495640 2408 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.58.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.58.198:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:46:12.520914 kubelet[2408]: I0325 02:46:12.520857 2408 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 25 02:46:12.536088 kubelet[2408]: I0325 02:46:12.536026 2408 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 02:46:12.537827 kubelet[2408]: I0325 02:46:12.537738 2408 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 25 02:46:12.538137 kubelet[2408]: I0325 02:46:12.538078 2408 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 02:46:12.538546 kubelet[2408]: I0325 02:46:12.538131 2408 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-nkv7s.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 25 02:46:12.538997 kubelet[2408]: I0325 02:46:12.538596 2408 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 02:46:12.538997 kubelet[2408]: I0325 02:46:12.538618 2408 container_manager_linux.go:300] "Creating device plugin manager"
Mar 25 02:46:12.538997 kubelet[2408]: I0325 02:46:12.538955 2408 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 02:46:12.542153 kubelet[2408]: I0325 02:46:12.542106 2408 kubelet.go:408] "Attempting to sync node with API server"
Mar 25 02:46:12.542153 kubelet[2408]: I0325 02:46:12.542147 2408 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 02:46:12.543385 kubelet[2408]: I0325 02:46:12.543324 2408 kubelet.go:314] "Adding apiserver pod source"
Mar 25 02:46:12.543519 kubelet[2408]: I0325 02:46:12.543392 2408 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 02:46:12.550511 kubelet[2408]: W0325 02:46:12.549970 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.58.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-nkv7s.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.198:6443: connect: connection refused
Mar 25 02:46:12.550511 kubelet[2408]: E0325 02:46:12.550150 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.58.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-nkv7s.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.58.198:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:46:12.550511 kubelet[2408]: I0325 02:46:12.550324 2408 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 02:46:12.552737 kubelet[2408]: I0325 02:46:12.552553 2408 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 02:46:12.554340 kubelet[2408]: W0325 02:46:12.553463 2408 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 25 02:46:12.556011 kubelet[2408]: I0325 02:46:12.555772 2408 server.go:1269] "Started kubelet"
Mar 25 02:46:12.560197 kubelet[2408]: W0325 02:46:12.559545 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.58.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.58.198:6443: connect: connection refused
Mar 25 02:46:12.560197 kubelet[2408]: E0325 02:46:12.559623 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.58.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.58.198:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:46:12.560197 kubelet[2408]: I0325 02:46:12.559735 2408 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 02:46:12.561835 kubelet[2408]: I0325 02:46:12.561762 2408 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 02:46:12.562685 kubelet[2408]: I0325 02:46:12.562659 2408 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 02:46:12.565918 kubelet[2408]: I0325 02:46:12.565866 2408 server.go:460] "Adding debug handlers to kubelet server"
Mar 25 02:46:12.569572 kubelet[2408]: E0325 02:46:12.565299 2408 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.58.198:6443/api/v1/namespaces/default/events\": dial tcp 10.230.58.198:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-nkv7s.gb1.brightbox.com.182febbbc7985aef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-nkv7s.gb1.brightbox.com,UID:srv-nkv7s.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-nkv7s.gb1.brightbox.com,},FirstTimestamp:2025-03-25 02:46:12.555733743 +0000 UTC m=+0.506054655,LastTimestamp:2025-03-25 02:46:12.555733743 +0000 UTC m=+0.506054655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-nkv7s.gb1.brightbox.com,}"
Mar 25 02:46:12.569572 kubelet[2408]: I0325 02:46:12.568552 2408 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 02:46:12.574892 kubelet[2408]: I0325 02:46:12.574848 2408 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 25 02:46:12.579276 kubelet[2408]: I0325 02:46:12.579240 2408 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 25 02:46:12.579758 kubelet[2408]: E0325 02:46:12.579723 2408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-nkv7s.gb1.brightbox.com\" not found"
Mar 25 02:46:12.581893 kubelet[2408]: I0325 02:46:12.580243 2408 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 25 02:46:12.581893 kubelet[2408]: I0325 02:46:12.580400 2408 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 02:46:12.588909 kubelet[2408]: W0325 02:46:12.588502 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.58.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.58.198:6443: connect: connection refused
Mar 25 02:46:12.588909 kubelet[2408]: E0325 02:46:12.588585 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.58.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.58.198:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:46:12.588909 kubelet[2408]: E0325 02:46:12.588698 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nkv7s.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.198:6443: connect: connection refused" interval="200ms"
Mar 25 02:46:12.589216 kubelet[2408]: I0325 02:46:12.589174 2408 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 02:46:12.591801 kubelet[2408]: I0325 02:46:12.591754 2408 factory.go:221] Registration of the containerd container factory successfully
Mar 25 02:46:12.591801 kubelet[2408]: I0325 02:46:12.591781 2408 factory.go:221] Registration of the systemd container factory successfully
Mar 25 02:46:12.603010 kubelet[2408]: I0325 02:46:12.602959 2408 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 02:46:12.604765 kubelet[2408]: I0325 02:46:12.604740 2408 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 25 02:46:12.605538 kubelet[2408]: I0325 02:46:12.605005 2408 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 25 02:46:12.605538 kubelet[2408]: I0325 02:46:12.605055 2408 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 25 02:46:12.605538 kubelet[2408]: E0325 02:46:12.605148 2408 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 25 02:46:12.617574 kubelet[2408]: W0325 02:46:12.617497 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.58.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.58.198:6443: connect: connection refused
Mar 25 02:46:12.617832 kubelet[2408]: E0325 02:46:12.617759 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.58.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.58.198:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:46:12.618188 kubelet[2408]: E0325 02:46:12.618161 2408 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 02:46:12.644908 kubelet[2408]: I0325 02:46:12.644828 2408 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 25 02:46:12.644908 kubelet[2408]: I0325 02:46:12.644859 2408 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 25 02:46:12.645147 kubelet[2408]: I0325 02:46:12.644922 2408 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 02:46:12.649993 kubelet[2408]: I0325 02:46:12.649950 2408 policy_none.go:49] "None policy: Start"
Mar 25 02:46:12.650942 kubelet[2408]: I0325 02:46:12.650913 2408 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 25 02:46:12.651647 kubelet[2408]: I0325 02:46:12.651159 2408 state_mem.go:35] "Initializing new in-memory state store"
Mar 25 02:46:12.662336 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 25 02:46:12.678989 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 25 02:46:12.680172 kubelet[2408]: E0325 02:46:12.679999 2408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-nkv7s.gb1.brightbox.com\" not found"
Mar 25 02:46:12.685476 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 25 02:46:12.695475 kubelet[2408]: I0325 02:46:12.695432 2408 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 25 02:46:12.696219 kubelet[2408]: I0325 02:46:12.696090 2408 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 25 02:46:12.696909 kubelet[2408]: I0325 02:46:12.696173 2408 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 25 02:46:12.698508 kubelet[2408]: I0325 02:46:12.698373 2408 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 25 02:46:12.701053 kubelet[2408]: E0325 02:46:12.701026 2408 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-nkv7s.gb1.brightbox.com\" not found"
Mar 25 02:46:12.723124 systemd[1]: Created slice kubepods-burstable-podc15fa8b7d77fdf3856c23f9ef1dd8abd.slice - libcontainer container kubepods-burstable-podc15fa8b7d77fdf3856c23f9ef1dd8abd.slice.
Mar 25 02:46:12.744291 systemd[1]: Created slice kubepods-burstable-pod49ba9e31da68319dc75c820fefd17689.slice - libcontainer container kubepods-burstable-pod49ba9e31da68319dc75c820fefd17689.slice.
Mar 25 02:46:12.751084 systemd[1]: Created slice kubepods-burstable-podd8b9134bbd9fa78573d99669f09f6778.slice - libcontainer container kubepods-burstable-podd8b9134bbd9fa78573d99669f09f6778.slice.
Mar 25 02:46:12.790112 kubelet[2408]: E0325 02:46:12.790038 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nkv7s.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.198:6443: connect: connection refused" interval="400ms"
Mar 25 02:46:12.801098 kubelet[2408]: I0325 02:46:12.801018 2408 kubelet_node_status.go:72] "Attempting to register node" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:12.801728 kubelet[2408]: E0325 02:46:12.801664 2408 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.58.198:6443/api/v1/nodes\": dial tcp 10.230.58.198:6443: connect: connection refused" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:12.881611 kubelet[2408]: I0325 02:46:12.881524 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c15fa8b7d77fdf3856c23f9ef1dd8abd-ca-certs\") pod \"kube-apiserver-srv-nkv7s.gb1.brightbox.com\" (UID: \"c15fa8b7d77fdf3856c23f9ef1dd8abd\") " pod="kube-system/kube-apiserver-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:12.882394 kubelet[2408]: I0325 02:46:12.882039 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c15fa8b7d77fdf3856c23f9ef1dd8abd-k8s-certs\") pod \"kube-apiserver-srv-nkv7s.gb1.brightbox.com\" (UID: \"c15fa8b7d77fdf3856c23f9ef1dd8abd\") " pod="kube-system/kube-apiserver-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:12.882394 kubelet[2408]: I0325 02:46:12.882151 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c15fa8b7d77fdf3856c23f9ef1dd8abd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-nkv7s.gb1.brightbox.com\" (UID: \"c15fa8b7d77fdf3856c23f9ef1dd8abd\") " pod="kube-system/kube-apiserver-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:12.882394 kubelet[2408]: I0325 02:46:12.882240 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49ba9e31da68319dc75c820fefd17689-ca-certs\") pod \"kube-controller-manager-srv-nkv7s.gb1.brightbox.com\" (UID: \"49ba9e31da68319dc75c820fefd17689\") " pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:12.882394 kubelet[2408]: I0325 02:46:12.882323 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8b9134bbd9fa78573d99669f09f6778-kubeconfig\") pod \"kube-scheduler-srv-nkv7s.gb1.brightbox.com\" (UID: \"d8b9134bbd9fa78573d99669f09f6778\") " pod="kube-system/kube-scheduler-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:12.883022 kubelet[2408]: I0325 02:46:12.882713 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49ba9e31da68319dc75c820fefd17689-flexvolume-dir\") pod \"kube-controller-manager-srv-nkv7s.gb1.brightbox.com\" (UID: \"49ba9e31da68319dc75c820fefd17689\") " pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:12.883022 kubelet[2408]: I0325 02:46:12.882814 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49ba9e31da68319dc75c820fefd17689-k8s-certs\") pod \"kube-controller-manager-srv-nkv7s.gb1.brightbox.com\" (UID: \"49ba9e31da68319dc75c820fefd17689\") " pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:12.883022 kubelet[2408]: I0325 02:46:12.882862 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49ba9e31da68319dc75c820fefd17689-kubeconfig\") pod \"kube-controller-manager-srv-nkv7s.gb1.brightbox.com\" (UID: \"49ba9e31da68319dc75c820fefd17689\") " pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:12.883022 kubelet[2408]: I0325 02:46:12.882939 2408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49ba9e31da68319dc75c820fefd17689-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-nkv7s.gb1.brightbox.com\" (UID: \"49ba9e31da68319dc75c820fefd17689\") " pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:13.006528 kubelet[2408]: I0325 02:46:13.005436 2408 kubelet_node_status.go:72] "Attempting to register node" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:13.006528 kubelet[2408]: E0325 02:46:13.006114 2408 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.58.198:6443/api/v1/nodes\": dial tcp 10.230.58.198:6443: connect: connection refused" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:13.040911 containerd[1520]: time="2025-03-25T02:46:13.040770913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-nkv7s.gb1.brightbox.com,Uid:c15fa8b7d77fdf3856c23f9ef1dd8abd,Namespace:kube-system,Attempt:0,}"
Mar 25 02:46:13.050145 containerd[1520]: time="2025-03-25T02:46:13.049688892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-nkv7s.gb1.brightbox.com,Uid:49ba9e31da68319dc75c820fefd17689,Namespace:kube-system,Attempt:0,}"
Mar 25 02:46:13.055499 containerd[1520]: time="2025-03-25T02:46:13.055154628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-nkv7s.gb1.brightbox.com,Uid:d8b9134bbd9fa78573d99669f09f6778,Namespace:kube-system,Attempt:0,}"
Mar 25 02:46:13.193576 kubelet[2408]: E0325 02:46:13.191631 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nkv7s.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.198:6443: connect: connection refused" interval="800ms"
Mar 25 02:46:13.373654 kubelet[2408]: W0325 02:46:13.372974 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.58.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.58.198:6443: connect: connection refused
Mar 25 02:46:13.373654 kubelet[2408]: E0325 02:46:13.373089 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.58.198:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.58.198:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:46:13.375143 containerd[1520]: time="2025-03-25T02:46:13.375087084Z" level=info msg="connecting to shim 70728ff1feeca6dbc7ebdaa7724a92a217a3bd5f4c946b4fba31cba108fe6dc8" address="unix:///run/containerd/s/baca6cf5829fd8cba38d61bbd7cc23babf3119037b36c945395c9d3515573072" namespace=k8s.io protocol=ttrpc version=3
Mar 25 02:46:13.386382 containerd[1520]: time="2025-03-25T02:46:13.386308522Z" level=info msg="connecting to shim 2c7c79768f7f3d30881d9b8813ea3087edeb8500d8d9fdad6ebe0e4ae5bd7707" address="unix:///run/containerd/s/4a33c2a863ed879161c3bd93f727287e68ee74e65218515ea3acf7cfa3413139" namespace=k8s.io protocol=ttrpc version=3
Mar 25 02:46:13.393105 containerd[1520]: time="2025-03-25T02:46:13.393051513Z" level=info msg="connecting to shim 87eece11be12cb1a33224a87f8378d2c7023e17077013acbca146e9ffbb98b20" address="unix:///run/containerd/s/ada3a183d23372510ccbdfb4f8d0b9b4ae09ca882d7aa21260fee8cb61f2251c" namespace=k8s.io protocol=ttrpc version=3
Mar 25 02:46:13.419313 systemd[1]: Started sshd@10-10.230.58.198:22-81.192.87.130:35078.service - OpenSSH per-connection server daemon (81.192.87.130:35078).
Mar 25 02:46:13.430435 kubelet[2408]: I0325 02:46:13.429745 2408 kubelet_node_status.go:72] "Attempting to register node" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:13.430435 kubelet[2408]: E0325 02:46:13.430361 2408 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.58.198:6443/api/v1/nodes\": dial tcp 10.230.58.198:6443: connect: connection refused" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:13.545942 systemd[1]: Started cri-containerd-70728ff1feeca6dbc7ebdaa7724a92a217a3bd5f4c946b4fba31cba108fe6dc8.scope - libcontainer container 70728ff1feeca6dbc7ebdaa7724a92a217a3bd5f4c946b4fba31cba108fe6dc8.
Mar 25 02:46:13.548994 systemd[1]: Started cri-containerd-87eece11be12cb1a33224a87f8378d2c7023e17077013acbca146e9ffbb98b20.scope - libcontainer container 87eece11be12cb1a33224a87f8378d2c7023e17077013acbca146e9ffbb98b20.
Mar 25 02:46:13.557819 systemd[1]: Started cri-containerd-2c7c79768f7f3d30881d9b8813ea3087edeb8500d8d9fdad6ebe0e4ae5bd7707.scope - libcontainer container 2c7c79768f7f3d30881d9b8813ea3087edeb8500d8d9fdad6ebe0e4ae5bd7707.
Mar 25 02:46:13.787363 containerd[1520]: time="2025-03-25T02:46:13.787293266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-nkv7s.gb1.brightbox.com,Uid:49ba9e31da68319dc75c820fefd17689,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c7c79768f7f3d30881d9b8813ea3087edeb8500d8d9fdad6ebe0e4ae5bd7707\""
Mar 25 02:46:13.795221 kubelet[2408]: W0325 02:46:13.794942 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.58.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.58.198:6443: connect: connection refused
Mar 25 02:46:13.795400 kubelet[2408]: E0325 02:46:13.795241 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.58.198:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.58.198:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:46:13.797809 containerd[1520]: time="2025-03-25T02:46:13.797402332Z" level=info msg="CreateContainer within sandbox \"2c7c79768f7f3d30881d9b8813ea3087edeb8500d8d9fdad6ebe0e4ae5bd7707\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 25 02:46:13.798296 containerd[1520]: time="2025-03-25T02:46:13.797424184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-nkv7s.gb1.brightbox.com,Uid:c15fa8b7d77fdf3856c23f9ef1dd8abd,Namespace:kube-system,Attempt:0,} returns sandbox id \"87eece11be12cb1a33224a87f8378d2c7023e17077013acbca146e9ffbb98b20\""
Mar 25 02:46:13.804269 containerd[1520]: time="2025-03-25T02:46:13.804214994Z" level=info msg="CreateContainer within sandbox \"87eece11be12cb1a33224a87f8378d2c7023e17077013acbca146e9ffbb98b20\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 25 02:46:13.821136 containerd[1520]: time="2025-03-25T02:46:13.821072059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-nkv7s.gb1.brightbox.com,Uid:d8b9134bbd9fa78573d99669f09f6778,Namespace:kube-system,Attempt:0,} returns sandbox id \"70728ff1feeca6dbc7ebdaa7724a92a217a3bd5f4c946b4fba31cba108fe6dc8\""
Mar 25 02:46:13.825699 containerd[1520]: time="2025-03-25T02:46:13.825660965Z" level=info msg="CreateContainer within sandbox \"70728ff1feeca6dbc7ebdaa7724a92a217a3bd5f4c946b4fba31cba108fe6dc8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 25 02:46:13.826391 containerd[1520]: time="2025-03-25T02:46:13.826356614Z" level=info msg="Container e36b477fb47d837183caeeba9553ca1d9278e95b0d706b0fb3bf1b8b4fd14fb4: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:46:13.830819 containerd[1520]: time="2025-03-25T02:46:13.830702157Z" level=info msg="Container 01312a9b006d97e5056525a49ab71d90d7293353655a4b18838c8efe6617639f: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:46:13.847061 containerd[1520]: time="2025-03-25T02:46:13.846844366Z" level=info msg="CreateContainer within sandbox \"87eece11be12cb1a33224a87f8378d2c7023e17077013acbca146e9ffbb98b20\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"01312a9b006d97e5056525a49ab71d90d7293353655a4b18838c8efe6617639f\""
Mar 25 02:46:13.848057 containerd[1520]: time="2025-03-25T02:46:13.847794229Z" level=info msg="CreateContainer within sandbox \"2c7c79768f7f3d30881d9b8813ea3087edeb8500d8d9fdad6ebe0e4ae5bd7707\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e36b477fb47d837183caeeba9553ca1d9278e95b0d706b0fb3bf1b8b4fd14fb4\""
Mar 25 02:46:13.849289 containerd[1520]: time="2025-03-25T02:46:13.849240706Z" level=info msg="StartContainer for \"e36b477fb47d837183caeeba9553ca1d9278e95b0d706b0fb3bf1b8b4fd14fb4\""
Mar 25 02:46:13.851668 containerd[1520]: time="2025-03-25T02:46:13.849298748Z" level=info msg="StartContainer for \"01312a9b006d97e5056525a49ab71d90d7293353655a4b18838c8efe6617639f\""
Mar 25 02:46:13.853326 containerd[1520]: time="2025-03-25T02:46:13.853272474Z" level=info msg="connecting to shim 01312a9b006d97e5056525a49ab71d90d7293353655a4b18838c8efe6617639f" address="unix:///run/containerd/s/ada3a183d23372510ccbdfb4f8d0b9b4ae09ca882d7aa21260fee8cb61f2251c" protocol=ttrpc version=3
Mar 25 02:46:13.854579 containerd[1520]: time="2025-03-25T02:46:13.854544553Z" level=info msg="connecting to shim e36b477fb47d837183caeeba9553ca1d9278e95b0d706b0fb3bf1b8b4fd14fb4" address="unix:///run/containerd/s/4a33c2a863ed879161c3bd93f727287e68ee74e65218515ea3acf7cfa3413139" protocol=ttrpc version=3
Mar 25 02:46:13.857310 containerd[1520]: time="2025-03-25T02:46:13.857220394Z" level=info msg="Container d25d085429eb398a39e3d3c9db10c18cb0ea87cb4e04210e321a8ba8786c8064: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:46:13.871042 containerd[1520]: time="2025-03-25T02:46:13.870971519Z" level=info msg="CreateContainer within sandbox \"70728ff1feeca6dbc7ebdaa7724a92a217a3bd5f4c946b4fba31cba108fe6dc8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d25d085429eb398a39e3d3c9db10c18cb0ea87cb4e04210e321a8ba8786c8064\""
Mar 25 02:46:13.872697 containerd[1520]: time="2025-03-25T02:46:13.872651727Z" level=info msg="StartContainer for \"d25d085429eb398a39e3d3c9db10c18cb0ea87cb4e04210e321a8ba8786c8064\""
Mar 25 02:46:13.874402 containerd[1520]: time="2025-03-25T02:46:13.874347982Z" level=info msg="connecting to shim d25d085429eb398a39e3d3c9db10c18cb0ea87cb4e04210e321a8ba8786c8064" address="unix:///run/containerd/s/baca6cf5829fd8cba38d61bbd7cc23babf3119037b36c945395c9d3515573072" protocol=ttrpc version=3
Mar 25 02:46:13.899938 systemd[1]: Started cri-containerd-01312a9b006d97e5056525a49ab71d90d7293353655a4b18838c8efe6617639f.scope - libcontainer container 01312a9b006d97e5056525a49ab71d90d7293353655a4b18838c8efe6617639f.
Mar 25 02:46:13.914131 systemd[1]: Started cri-containerd-e36b477fb47d837183caeeba9553ca1d9278e95b0d706b0fb3bf1b8b4fd14fb4.scope - libcontainer container e36b477fb47d837183caeeba9553ca1d9278e95b0d706b0fb3bf1b8b4fd14fb4.
Mar 25 02:46:13.918103 sshd[2494]: Invalid user scetin from 81.192.87.130 port 35078
Mar 25 02:46:13.940510 systemd[1]: Started cri-containerd-d25d085429eb398a39e3d3c9db10c18cb0ea87cb4e04210e321a8ba8786c8064.scope - libcontainer container d25d085429eb398a39e3d3c9db10c18cb0ea87cb4e04210e321a8ba8786c8064.
Mar 25 02:46:13.982732 kubelet[2408]: W0325 02:46:13.982026 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.58.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-nkv7s.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.58.198:6443: connect: connection refused
Mar 25 02:46:13.982732 kubelet[2408]: E0325 02:46:13.982144 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.58.198:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-nkv7s.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.58.198:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:46:13.991894 sshd[2494]: Received disconnect from 81.192.87.130 port 35078:11: Bye Bye [preauth]
Mar 25 02:46:13.991894 sshd[2494]: Disconnected from invalid user scetin 81.192.87.130 port 35078 [preauth]
Mar 25 02:46:13.992917 kubelet[2408]: E0325 02:46:13.992433 2408 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.58.198:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nkv7s.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.58.198:6443: connect: connection refused" interval="1.6s"
Mar 25 02:46:13.993340 systemd[1]: sshd@10-10.230.58.198:22-81.192.87.130:35078.service: Deactivated successfully.
Mar 25 02:46:14.150066 kubelet[2408]: W0325 02:46:14.148700 2408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.58.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.58.198:6443: connect: connection refused
Mar 25 02:46:14.150066 kubelet[2408]: E0325 02:46:14.149999 2408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.58.198:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.58.198:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:46:14.175342 containerd[1520]: time="2025-03-25T02:46:14.175285507Z" level=info msg="StartContainer for \"e36b477fb47d837183caeeba9553ca1d9278e95b0d706b0fb3bf1b8b4fd14fb4\" returns successfully"
Mar 25 02:46:14.215563 containerd[1520]: time="2025-03-25T02:46:14.215487568Z" level=info msg="StartContainer for \"01312a9b006d97e5056525a49ab71d90d7293353655a4b18838c8efe6617639f\" returns successfully"
Mar 25 02:46:14.235367 kubelet[2408]: I0325 02:46:14.234451 2408 kubelet_node_status.go:72] "Attempting to register node" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:14.236848 kubelet[2408]: E0325 02:46:14.236699 2408 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.58.198:6443/api/v1/nodes\": dial tcp 10.230.58.198:6443: connect: connection refused" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:14.254157 containerd[1520]: time="2025-03-25T02:46:14.254080125Z" level=info msg="StartContainer for \"d25d085429eb398a39e3d3c9db10c18cb0ea87cb4e04210e321a8ba8786c8064\" returns successfully"
Mar 25 02:46:14.544476 kubelet[2408]: E0325 02:46:14.544314 2408 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.58.198:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.58.198:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:46:15.847244 kubelet[2408]: I0325 02:46:15.847175 2408 kubelet_node_status.go:72] "Attempting to register node" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:17.309654 kubelet[2408]: E0325 02:46:17.309583 2408 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-nkv7s.gb1.brightbox.com\" not found" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:17.391302 kubelet[2408]: I0325 02:46:17.391027 2408 kubelet_node_status.go:75] "Successfully registered node" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:17.391302 kubelet[2408]: E0325 02:46:17.391090 2408 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"srv-nkv7s.gb1.brightbox.com\": node \"srv-nkv7s.gb1.brightbox.com\" not found"
Mar 25 02:46:17.562163 kubelet[2408]: I0325 02:46:17.561770 2408 apiserver.go:52] "Watching apiserver"
Mar 25 02:46:17.580430 kubelet[2408]: I0325 02:46:17.580381 2408 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 25 02:46:19.570517 systemd[1]: Reload requested from client PID 2688 ('systemctl') (unit session-11.scope)...
Mar 25 02:46:19.570548 systemd[1]: Reloading...
Mar 25 02:46:19.736041 zram_generator::config[2737]: No configuration found.
Mar 25 02:46:19.942688 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 02:46:20.124615 systemd[1]: Reloading finished in 553 ms.
Mar 25 02:46:20.168610 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:46:20.185072 systemd[1]: kubelet.service: Deactivated successfully.
Mar 25 02:46:20.185618 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:46:20.185759 systemd[1]: kubelet.service: Consumed 1.060s CPU time, 114.1M memory peak.
Mar 25 02:46:20.190165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:46:20.417754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:46:20.432718 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 25 02:46:20.512977 kubelet[2798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 02:46:20.512977 kubelet[2798]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 25 02:46:20.512977 kubelet[2798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 02:46:20.513787 kubelet[2798]: I0325 02:46:20.513082 2798 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 25 02:46:20.527062 kubelet[2798]: I0325 02:46:20.527003 2798 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 25 02:46:20.527062 kubelet[2798]: I0325 02:46:20.527056 2798 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 25 02:46:20.528889 kubelet[2798]: I0325 02:46:20.527771 2798 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 25 02:46:20.531077 kubelet[2798]: I0325 02:46:20.531040 2798 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 25 02:46:20.534943 kubelet[2798]: I0325 02:46:20.534638 2798 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 02:46:20.543013 kubelet[2798]: I0325 02:46:20.541981 2798 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 25 02:46:20.551622 kubelet[2798]: I0325 02:46:20.551088 2798 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 02:46:20.551622 kubelet[2798]: I0325 02:46:20.551307 2798 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 25 02:46:20.551622 kubelet[2798]: I0325 02:46:20.551579 2798 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 02:46:20.552027 kubelet[2798]: I0325 02:46:20.551614 2798 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-nkv7s.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 25 02:46:20.552027 kubelet[2798]: I0325 02:46:20.551858 2798 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 02:46:20.552027 kubelet[2798]: I0325 02:46:20.551925 2798 container_manager_linux.go:300] "Creating device plugin manager"
Mar 25 02:46:20.552027 kubelet[2798]: I0325 02:46:20.551978 2798 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 02:46:20.555156 kubelet[2798]: I0325 02:46:20.552167 2798 kubelet.go:408] "Attempting to sync node with API server"
Mar 25 02:46:20.555156 kubelet[2798]: I0325 02:46:20.552198 2798 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 02:46:20.555156 kubelet[2798]: I0325 02:46:20.552251 2798 kubelet.go:314] "Adding apiserver pod source"
Mar 25 02:46:20.555156 kubelet[2798]: I0325 02:46:20.552296 2798 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 02:46:20.556213 kubelet[2798]: I0325 02:46:20.556175 2798 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 02:46:20.557566 kubelet[2798]: I0325 02:46:20.557535 2798 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 02:46:20.558323 kubelet[2798]: I0325 02:46:20.558295 2798 server.go:1269] "Started kubelet"
Mar 25 02:46:20.567840 kubelet[2798]: I0325 02:46:20.567590 2798 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 02:46:20.572096 kubelet[2798]: I0325 02:46:20.570792 2798 server.go:460] "Adding debug handlers to kubelet server"
Mar 25 02:46:20.579730 kubelet[2798]: I0325 02:46:20.579668 2798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 02:46:20.582969 kubelet[2798]: I0325 02:46:20.582767 2798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 02:46:20.584206 kubelet[2798]: I0325 02:46:20.583497 2798 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 02:46:20.599093 kubelet[2798]: I0325 02:46:20.597846 2798 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 25 02:46:20.602941 kubelet[2798]: I0325 02:46:20.600801 2798 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 25 02:46:20.602941 kubelet[2798]: E0325 02:46:20.601291 2798 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-nkv7s.gb1.brightbox.com\" not found"
Mar 25 02:46:20.603467 kubelet[2798]: I0325 02:46:20.603402 2798 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 25 02:46:20.605010 kubelet[2798]: I0325 02:46:20.603917 2798 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 02:46:20.621206 kubelet[2798]: I0325 02:46:20.620490 2798 factory.go:221] Registration of the containerd container factory successfully
Mar 25 02:46:20.621511 kubelet[2798]: I0325 02:46:20.621411 2798 factory.go:221] Registration of the systemd container factory successfully
Mar 25 02:46:20.622651 kubelet[2798]: I0325 02:46:20.621812 2798 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 02:46:20.629906 kubelet[2798]: I0325 02:46:20.628844 2798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 02:46:20.630526 kubelet[2798]: I0325 02:46:20.630425 2798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 25 02:46:20.630526 kubelet[2798]: I0325 02:46:20.630471 2798 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 25 02:46:20.630526 kubelet[2798]: I0325 02:46:20.630497 2798 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 25 02:46:20.631296 kubelet[2798]: E0325 02:46:20.630559 2798 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 25 02:46:20.635000 kubelet[2798]: E0325 02:46:20.634129 2798 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 02:46:20.747093 kubelet[2798]: E0325 02:46:20.745812 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 25 02:46:20.762492 kubelet[2798]: I0325 02:46:20.762412 2798 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 25 02:46:20.762492 kubelet[2798]: I0325 02:46:20.762473 2798 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 25 02:46:20.762492 kubelet[2798]: I0325 02:46:20.762509 2798 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 02:46:20.762812 kubelet[2798]: I0325 02:46:20.762763 2798 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 25 02:46:20.762812 kubelet[2798]: I0325 02:46:20.762784 2798 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 25 02:46:20.762812 kubelet[2798]: I0325 02:46:20.762816 2798 policy_none.go:49] "None policy: Start"
Mar 25 02:46:20.767096 kubelet[2798]: I0325 02:46:20.766792 2798 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 25 02:46:20.767096 kubelet[2798]: I0325 02:46:20.766828 2798 state_mem.go:35] "Initializing new in-memory state store"
Mar 25 02:46:20.767650 kubelet[2798]: I0325 02:46:20.767616 2798 state_mem.go:75] "Updated machine memory state"
Mar 25 02:46:20.787209 kubelet[2798]: I0325 02:46:20.786099 2798 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 25 02:46:20.787712 kubelet[2798]: I0325 02:46:20.787651 2798 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 25 02:46:20.788767 kubelet[2798]: I0325 02:46:20.787809 2798 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 25 02:46:20.788767 kubelet[2798]: I0325 02:46:20.788674 2798 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 25 02:46:20.929300 kubelet[2798]: I0325 02:46:20.929246 2798 kubelet_node_status.go:72] "Attempting to register node" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:20.962744 kubelet[2798]: I0325 02:46:20.962202 2798 kubelet_node_status.go:111] "Node was previously registered" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:20.962744 kubelet[2798]: I0325 02:46:20.962507 2798 kubelet_node_status.go:75] "Successfully registered node" node="srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:20.978523 kubelet[2798]: W0325 02:46:20.975278 2798 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 25 02:46:20.978523 kubelet[2798]: W0325 02:46:20.975811 2798 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 25 02:46:20.978523 kubelet[2798]: W0325 02:46:20.976467 2798 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 25 02:46:21.008184 kubelet[2798]: I0325 02:46:21.007787 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c15fa8b7d77fdf3856c23f9ef1dd8abd-ca-certs\") pod \"kube-apiserver-srv-nkv7s.gb1.brightbox.com\" (UID: \"c15fa8b7d77fdf3856c23f9ef1dd8abd\") " pod="kube-system/kube-apiserver-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:21.008184 kubelet[2798]: I0325 02:46:21.007896 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c15fa8b7d77fdf3856c23f9ef1dd8abd-usr-share-ca-certificates\") pod \"kube-apiserver-srv-nkv7s.gb1.brightbox.com\" (UID: \"c15fa8b7d77fdf3856c23f9ef1dd8abd\") " pod="kube-system/kube-apiserver-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:21.008184 kubelet[2798]: I0325 02:46:21.007997 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/49ba9e31da68319dc75c820fefd17689-k8s-certs\") pod \"kube-controller-manager-srv-nkv7s.gb1.brightbox.com\" (UID: \"49ba9e31da68319dc75c820fefd17689\") " pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:21.008184 kubelet[2798]: I0325 02:46:21.008029 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49ba9e31da68319dc75c820fefd17689-kubeconfig\") pod \"kube-controller-manager-srv-nkv7s.gb1.brightbox.com\" (UID: \"49ba9e31da68319dc75c820fefd17689\") " pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:21.008184 kubelet[2798]: I0325 02:46:21.008057 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c15fa8b7d77fdf3856c23f9ef1dd8abd-k8s-certs\") pod \"kube-apiserver-srv-nkv7s.gb1.brightbox.com\" (UID: \"c15fa8b7d77fdf3856c23f9ef1dd8abd\") " pod="kube-system/kube-apiserver-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:21.008661 kubelet[2798]: I0325 02:46:21.008088 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/49ba9e31da68319dc75c820fefd17689-ca-certs\") pod \"kube-controller-manager-srv-nkv7s.gb1.brightbox.com\" (UID: \"49ba9e31da68319dc75c820fefd17689\") " pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:21.008661 kubelet[2798]: I0325 02:46:21.008126 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/49ba9e31da68319dc75c820fefd17689-flexvolume-dir\") pod \"kube-controller-manager-srv-nkv7s.gb1.brightbox.com\" (UID: \"49ba9e31da68319dc75c820fefd17689\") " pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:21.008661 kubelet[2798]: I0325 02:46:21.008155 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/49ba9e31da68319dc75c820fefd17689-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-nkv7s.gb1.brightbox.com\" (UID: \"49ba9e31da68319dc75c820fefd17689\") " pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:21.008661 kubelet[2798]: I0325 02:46:21.008220 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d8b9134bbd9fa78573d99669f09f6778-kubeconfig\") pod \"kube-scheduler-srv-nkv7s.gb1.brightbox.com\" (UID: \"d8b9134bbd9fa78573d99669f09f6778\") " pod="kube-system/kube-scheduler-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:21.557642 kubelet[2798]: I0325 02:46:21.557301 2798 apiserver.go:52] "Watching apiserver"
Mar 25 02:46:21.603936 kubelet[2798]: I0325 02:46:21.603683 2798 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 25 02:46:21.709047 kubelet[2798]: W0325 02:46:21.708805 2798 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 25 02:46:21.709673 kubelet[2798]: E0325 02:46:21.709430 2798 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-nkv7s.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-nkv7s.gb1.brightbox.com"
Mar 25 02:46:21.719399 kubelet[2798]: I0325 02:46:21.718987 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-nkv7s.gb1.brightbox.com" podStartSLOduration=1.7189426270000001 podStartE2EDuration="1.718942627s" podCreationTimestamp="2025-03-25 02:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:46:21.71247019 +0000 UTC m=+1.270689614" watchObservedRunningTime="2025-03-25 02:46:21.718942627 +0000 UTC m=+1.277162038"
Mar 25 02:46:21.779780 kubelet[2798]: I0325 02:46:21.779693 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-nkv7s.gb1.brightbox.com" podStartSLOduration=1.7796635699999999 podStartE2EDuration="1.77966357s" podCreationTimestamp="2025-03-25 02:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:46:21.741835051 +0000 UTC m=+1.300054499" watchObservedRunningTime="2025-03-25 02:46:21.77966357 +0000 UTC m=+1.337882981"
Mar 25 02:46:21.813773 kubelet[2798]: I0325 02:46:21.813449 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-nkv7s.gb1.brightbox.com" podStartSLOduration=1.813422826 podStartE2EDuration="1.813422826s" podCreationTimestamp="2025-03-25 02:46:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:46:21.780270702 +0000 UTC m=+1.338490134" watchObservedRunningTime="2025-03-25 02:46:21.813422826 +0000 UTC m=+1.371642260"
Mar 25 02:46:25.366923 kubelet[2798]: I0325 02:46:25.366695 2798 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 25 02:46:25.370240 kubelet[2798]: I0325 02:46:25.369478 2798 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 25 02:46:25.370318 containerd[1520]: time="2025-03-25T02:46:25.369219843Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 25 02:46:26.363586 systemd[1]: Created slice kubepods-besteffort-podd0817a2d_0832_4e97_89b2_069eb19e0e74.slice - libcontainer container kubepods-besteffort-podd0817a2d_0832_4e97_89b2_069eb19e0e74.slice.
Mar 25 02:46:26.442175 kubelet[2798]: I0325 02:46:26.441548 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0817a2d-0832-4e97-89b2-069eb19e0e74-lib-modules\") pod \"kube-proxy-px9g5\" (UID: \"d0817a2d-0832-4e97-89b2-069eb19e0e74\") " pod="kube-system/kube-proxy-px9g5"
Mar 25 02:46:26.442175 kubelet[2798]: I0325 02:46:26.441627 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6vd\" (UniqueName: \"kubernetes.io/projected/d0817a2d-0832-4e97-89b2-069eb19e0e74-kube-api-access-vv6vd\") pod \"kube-proxy-px9g5\" (UID: \"d0817a2d-0832-4e97-89b2-069eb19e0e74\") " pod="kube-system/kube-proxy-px9g5"
Mar 25 02:46:26.442175 kubelet[2798]: I0325 02:46:26.441669 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0817a2d-0832-4e97-89b2-069eb19e0e74-kube-proxy\") pod \"kube-proxy-px9g5\" (UID: \"d0817a2d-0832-4e97-89b2-069eb19e0e74\") " pod="kube-system/kube-proxy-px9g5"
Mar 25 02:46:26.442175 kubelet[2798]: I0325 02:46:26.441696 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0817a2d-0832-4e97-89b2-069eb19e0e74-xtables-lock\") pod \"kube-proxy-px9g5\" (UID: \"d0817a2d-0832-4e97-89b2-069eb19e0e74\") " pod="kube-system/kube-proxy-px9g5"
Mar 25 02:46:26.500175 systemd[1]: Created slice kubepods-besteffort-podaedbac56_18f9_48f4_92b1_d20ac14bc2ed.slice - libcontainer container kubepods-besteffort-podaedbac56_18f9_48f4_92b1_d20ac14bc2ed.slice.
Mar 25 02:46:26.542782 kubelet[2798]: I0325 02:46:26.542697 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw27h\" (UniqueName: \"kubernetes.io/projected/aedbac56-18f9-48f4-92b1-d20ac14bc2ed-kube-api-access-nw27h\") pod \"tigera-operator-64ff5465b7-hw6vg\" (UID: \"aedbac56-18f9-48f4-92b1-d20ac14bc2ed\") " pod="tigera-operator/tigera-operator-64ff5465b7-hw6vg"
Mar 25 02:46:26.542782 kubelet[2798]: I0325 02:46:26.542781 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aedbac56-18f9-48f4-92b1-d20ac14bc2ed-var-lib-calico\") pod \"tigera-operator-64ff5465b7-hw6vg\" (UID: \"aedbac56-18f9-48f4-92b1-d20ac14bc2ed\") " pod="tigera-operator/tigera-operator-64ff5465b7-hw6vg"
Mar 25 02:46:26.677507 containerd[1520]: time="2025-03-25T02:46:26.677403357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-px9g5,Uid:d0817a2d-0832-4e97-89b2-069eb19e0e74,Namespace:kube-system,Attempt:0,}"
Mar 25 02:46:26.715027 containerd[1520]: time="2025-03-25T02:46:26.714295394Z" level=info msg="connecting to shim 82af7f5c2ca59cdef53d3019b97004580520f4450e4c5b29f43f5b541dc256b9" address="unix:///run/containerd/s/c035001f77930d379766bbebcf924976b01937943d19c4cf9f57ac4f6c4efd9e" namespace=k8s.io protocol=ttrpc version=3
Mar 25 02:46:26.760282 systemd[1]: Started cri-containerd-82af7f5c2ca59cdef53d3019b97004580520f4450e4c5b29f43f5b541dc256b9.scope - libcontainer container 82af7f5c2ca59cdef53d3019b97004580520f4450e4c5b29f43f5b541dc256b9.
Mar 25 02:46:26.806064 containerd[1520]: time="2025-03-25T02:46:26.806004949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-hw6vg,Uid:aedbac56-18f9-48f4-92b1-d20ac14bc2ed,Namespace:tigera-operator,Attempt:0,}"
Mar 25 02:46:26.807776 containerd[1520]: time="2025-03-25T02:46:26.807741548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-px9g5,Uid:d0817a2d-0832-4e97-89b2-069eb19e0e74,Namespace:kube-system,Attempt:0,} returns sandbox id \"82af7f5c2ca59cdef53d3019b97004580520f4450e4c5b29f43f5b541dc256b9\""
Mar 25 02:46:26.815782 containerd[1520]: time="2025-03-25T02:46:26.815713852Z" level=info msg="CreateContainer within sandbox \"82af7f5c2ca59cdef53d3019b97004580520f4450e4c5b29f43f5b541dc256b9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 25 02:46:26.849918 containerd[1520]: time="2025-03-25T02:46:26.848396551Z" level=info msg="Container 077ce73f9ccfb7ebb1eb90dde767c5f8967dd15b3033ff27dd5217883c70ad9f: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:46:26.888139 containerd[1520]: time="2025-03-25T02:46:26.888071632Z" level=info msg="connecting to shim 589aee5f6abcc85abe0cf4f40100bf3afd0e65c1e94c5de0a59d1969f6cc6583" address="unix:///run/containerd/s/035616505ba0d28b791630a6c503ceaf9008048bcaf998d5ad4c20463e5c15ce" namespace=k8s.io protocol=ttrpc version=3
Mar 25 02:46:26.894638 containerd[1520]: time="2025-03-25T02:46:26.894577647Z" level=info msg="CreateContainer within sandbox \"82af7f5c2ca59cdef53d3019b97004580520f4450e4c5b29f43f5b541dc256b9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"077ce73f9ccfb7ebb1eb90dde767c5f8967dd15b3033ff27dd5217883c70ad9f\""
Mar 25 02:46:26.897300 containerd[1520]: time="2025-03-25T02:46:26.896846043Z" level=info msg="StartContainer for \"077ce73f9ccfb7ebb1eb90dde767c5f8967dd15b3033ff27dd5217883c70ad9f\""
Mar 25 02:46:26.901474 containerd[1520]: time="2025-03-25T02:46:26.901368252Z" level=info msg="connecting to shim 077ce73f9ccfb7ebb1eb90dde767c5f8967dd15b3033ff27dd5217883c70ad9f" address="unix:///run/containerd/s/c035001f77930d379766bbebcf924976b01937943d19c4cf9f57ac4f6c4efd9e" protocol=ttrpc version=3
Mar 25 02:46:26.941154 systemd[1]: Started cri-containerd-077ce73f9ccfb7ebb1eb90dde767c5f8967dd15b3033ff27dd5217883c70ad9f.scope - libcontainer container 077ce73f9ccfb7ebb1eb90dde767c5f8967dd15b3033ff27dd5217883c70ad9f.
Mar 25 02:46:26.965457 systemd[1]: Started cri-containerd-589aee5f6abcc85abe0cf4f40100bf3afd0e65c1e94c5de0a59d1969f6cc6583.scope - libcontainer container 589aee5f6abcc85abe0cf4f40100bf3afd0e65c1e94c5de0a59d1969f6cc6583.
Mar 25 02:46:27.087580 containerd[1520]: time="2025-03-25T02:46:27.086167622Z" level=info msg="StartContainer for \"077ce73f9ccfb7ebb1eb90dde767c5f8967dd15b3033ff27dd5217883c70ad9f\" returns successfully"
Mar 25 02:46:27.114653 containerd[1520]: time="2025-03-25T02:46:27.114572078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-hw6vg,Uid:aedbac56-18f9-48f4-92b1-d20ac14bc2ed,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"589aee5f6abcc85abe0cf4f40100bf3afd0e65c1e94c5de0a59d1969f6cc6583\""
Mar 25 02:46:27.121285 containerd[1520]: time="2025-03-25T02:46:27.121150709Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\""
Mar 25 02:46:27.191294 sudo[1813]: pam_unix(sudo:session): session closed for user root
Mar 25 02:46:27.345111 sshd[1812]: Connection closed by 139.178.68.195 port 34476
Mar 25 02:46:27.350312 sshd-session[1810]: pam_unix(sshd:session): session closed for user core
Mar 25 02:46:27.361092 systemd[1]: sshd@9-10.230.58.198:22-139.178.68.195:34476.service: Deactivated successfully.
Mar 25 02:46:27.366538 systemd[1]: session-11.scope: Deactivated successfully.
Mar 25 02:46:27.367190 systemd[1]: session-11.scope: Consumed 7.721s CPU time, 149.2M memory peak.
Mar 25 02:46:27.373242 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit.
Mar 25 02:46:27.376147 systemd-logind[1503]: Removed session 11.
Mar 25 02:46:27.569753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3662193031.mount: Deactivated successfully.
Mar 25 02:46:31.229829 kubelet[2798]: I0325 02:46:31.229089 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-px9g5" podStartSLOduration=5.229061269 podStartE2EDuration="5.229061269s" podCreationTimestamp="2025-03-25 02:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:46:27.720948574 +0000 UTC m=+7.279168009" watchObservedRunningTime="2025-03-25 02:46:31.229061269 +0000 UTC m=+10.787280686"
Mar 25 02:46:33.088615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840573833.mount: Deactivated successfully.
Mar 25 02:46:33.980907 containerd[1520]: time="2025-03-25T02:46:33.980736914Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:46:33.982086 containerd[1520]: time="2025-03-25T02:46:33.981941329Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008"
Mar 25 02:46:33.983335 containerd[1520]: time="2025-03-25T02:46:33.983296145Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:46:33.987376 containerd[1520]: time="2025-03-25T02:46:33.987329959Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:46:33.989724 containerd[1520]: time="2025-03-25T02:46:33.989668323Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 6.86841637s"
Mar 25 02:46:33.989836 containerd[1520]: time="2025-03-25T02:46:33.989753540Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\""
Mar 25 02:46:33.994004 containerd[1520]: time="2025-03-25T02:46:33.993278987Z" level=info msg="CreateContainer within sandbox \"589aee5f6abcc85abe0cf4f40100bf3afd0e65c1e94c5de0a59d1969f6cc6583\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 25 02:46:34.027712 containerd[1520]: time="2025-03-25T02:46:34.027605912Z" level=info msg="Container f453b2c5bcc3f9b1c6ba81749dbd3d51f18527d55a707d1bfa590a4256829c64: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:46:34.035832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569363143.mount: Deactivated successfully.
Mar 25 02:46:34.041063 containerd[1520]: time="2025-03-25T02:46:34.041025611Z" level=info msg="CreateContainer within sandbox \"589aee5f6abcc85abe0cf4f40100bf3afd0e65c1e94c5de0a59d1969f6cc6583\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f453b2c5bcc3f9b1c6ba81749dbd3d51f18527d55a707d1bfa590a4256829c64\""
Mar 25 02:46:34.042242 containerd[1520]: time="2025-03-25T02:46:34.042200876Z" level=info msg="StartContainer for \"f453b2c5bcc3f9b1c6ba81749dbd3d51f18527d55a707d1bfa590a4256829c64\""
Mar 25 02:46:34.043707 containerd[1520]: time="2025-03-25T02:46:34.043664152Z" level=info msg="connecting to shim f453b2c5bcc3f9b1c6ba81749dbd3d51f18527d55a707d1bfa590a4256829c64" address="unix:///run/containerd/s/035616505ba0d28b791630a6c503ceaf9008048bcaf998d5ad4c20463e5c15ce" protocol=ttrpc version=3
Mar 25 02:46:34.084073 systemd[1]: Started cri-containerd-f453b2c5bcc3f9b1c6ba81749dbd3d51f18527d55a707d1bfa590a4256829c64.scope - libcontainer container f453b2c5bcc3f9b1c6ba81749dbd3d51f18527d55a707d1bfa590a4256829c64.
Mar 25 02:46:34.146675 containerd[1520]: time="2025-03-25T02:46:34.146488420Z" level=info msg="StartContainer for \"f453b2c5bcc3f9b1c6ba81749dbd3d51f18527d55a707d1bfa590a4256829c64\" returns successfully" Mar 25 02:46:34.740304 kubelet[2798]: I0325 02:46:34.739755 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-hw6vg" podStartSLOduration=1.8686196160000001 podStartE2EDuration="8.739733504s" podCreationTimestamp="2025-03-25 02:46:26 +0000 UTC" firstStartedPulling="2025-03-25 02:46:27.119918725 +0000 UTC m=+6.678138131" lastFinishedPulling="2025-03-25 02:46:33.99103261 +0000 UTC m=+13.549252019" observedRunningTime="2025-03-25 02:46:34.739478874 +0000 UTC m=+14.297698331" watchObservedRunningTime="2025-03-25 02:46:34.739733504 +0000 UTC m=+14.297952931" Mar 25 02:46:37.471692 systemd[1]: Created slice kubepods-besteffort-podcc364839_809a_4c22_869e_01d17cf44089.slice - libcontainer container kubepods-besteffort-podcc364839_809a_4c22_869e_01d17cf44089.slice. Mar 25 02:46:37.474823 kubelet[2798]: W0325 02:46:37.474778 2798 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:srv-nkv7s.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-nkv7s.gb1.brightbox.com' and this object Mar 25 02:46:37.475327 kubelet[2798]: E0325 02:46:37.474859 2798 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:srv-nkv7s.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-nkv7s.gb1.brightbox.com' and this object" logger="UnhandledError" Mar 25 02:46:37.475327 kubelet[2798]: W0325 02:46:37.475143 2798 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:srv-nkv7s.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-nkv7s.gb1.brightbox.com' and this object Mar 25 02:46:37.475327 kubelet[2798]: E0325 02:46:37.475193 2798 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:srv-nkv7s.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-nkv7s.gb1.brightbox.com' and this object" logger="UnhandledError" Mar 25 02:46:37.475531 kubelet[2798]: W0325 02:46:37.475444 2798 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-nkv7s.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-nkv7s.gb1.brightbox.com' and this object Mar 25 02:46:37.475531 kubelet[2798]: E0325 02:46:37.475473 2798 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User 
\"system:node:srv-nkv7s.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-nkv7s.gb1.brightbox.com' and this object" logger="UnhandledError" Mar 25 02:46:37.521007 kubelet[2798]: I0325 02:46:37.520907 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc364839-809a-4c22-869e-01d17cf44089-typha-certs\") pod \"calico-typha-6bc4bd9dcd-767tk\" (UID: \"cc364839-809a-4c22-869e-01d17cf44089\") " pod="calico-system/calico-typha-6bc4bd9dcd-767tk" Mar 25 02:46:37.521007 kubelet[2798]: I0325 02:46:37.521014 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfwkg\" (UniqueName: \"kubernetes.io/projected/cc364839-809a-4c22-869e-01d17cf44089-kube-api-access-pfwkg\") pod \"calico-typha-6bc4bd9dcd-767tk\" (UID: \"cc364839-809a-4c22-869e-01d17cf44089\") " pod="calico-system/calico-typha-6bc4bd9dcd-767tk" Mar 25 02:46:37.521386 kubelet[2798]: I0325 02:46:37.521054 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc364839-809a-4c22-869e-01d17cf44089-tigera-ca-bundle\") pod \"calico-typha-6bc4bd9dcd-767tk\" (UID: \"cc364839-809a-4c22-869e-01d17cf44089\") " pod="calico-system/calico-typha-6bc4bd9dcd-767tk" Mar 25 02:46:37.700167 systemd[1]: Created slice kubepods-besteffort-pod0955d07c_084c_4019_8850_7cc3d58361bd.slice - libcontainer container kubepods-besteffort-pod0955d07c_084c_4019_8850_7cc3d58361bd.slice. Mar 25 02:46:37.722031 kubelet[2798]: I0325 02:46:37.721839 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0955d07c-084c-4019-8850-7cc3d58361bd-policysync\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722031 kubelet[2798]: I0325 02:46:37.721911 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0955d07c-084c-4019-8850-7cc3d58361bd-flexvol-driver-host\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722031 kubelet[2798]: I0325 02:46:37.721985 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0955d07c-084c-4019-8850-7cc3d58361bd-lib-modules\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722370 kubelet[2798]: I0325 02:46:37.722035 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0955d07c-084c-4019-8850-7cc3d58361bd-cni-net-dir\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722370 kubelet[2798]: I0325 02:46:37.722101 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0955d07c-084c-4019-8850-7cc3d58361bd-var-run-calico\") pod \"calico-node-x74dh\" (UID: 
\"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722370 kubelet[2798]: I0325 02:46:37.722217 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7rh6\" (UniqueName: \"kubernetes.io/projected/0955d07c-084c-4019-8850-7cc3d58361bd-kube-api-access-w7rh6\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722370 kubelet[2798]: I0325 02:46:37.722277 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0955d07c-084c-4019-8850-7cc3d58361bd-var-lib-calico\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722370 kubelet[2798]: I0325 02:46:37.722307 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0955d07c-084c-4019-8850-7cc3d58361bd-cni-bin-dir\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722670 kubelet[2798]: I0325 02:46:37.722339 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0955d07c-084c-4019-8850-7cc3d58361bd-cni-log-dir\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722670 kubelet[2798]: I0325 02:46:37.722404 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0955d07c-084c-4019-8850-7cc3d58361bd-node-certs\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722670 kubelet[2798]: I0325 02:46:37.722435 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0955d07c-084c-4019-8850-7cc3d58361bd-tigera-ca-bundle\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.722670 kubelet[2798]: I0325 02:46:37.722487 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0955d07c-084c-4019-8850-7cc3d58361bd-xtables-lock\") pod \"calico-node-x74dh\" (UID: \"0955d07c-084c-4019-8850-7cc3d58361bd\") " pod="calico-system/calico-node-x74dh" Mar 25 02:46:37.821265 kubelet[2798]: E0325 02:46:37.820996 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6zz7" podUID="2d008efc-d6e3-44fa-b5c6-37d59264bf67" Mar 25 02:46:37.834779 kubelet[2798]: E0325 02:46:37.834508 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.835186 kubelet[2798]: W0325 02:46:37.834945 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.836251 kubelet[2798]: E0325 02:46:37.836203 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.836978 kubelet[2798]: E0325 02:46:37.836931 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.836978 kubelet[2798]: W0325 02:46:37.836952 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.837652 kubelet[2798]: E0325 02:46:37.837184 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.838479 kubelet[2798]: E0325 02:46:37.837787 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.838479 kubelet[2798]: W0325 02:46:37.837808 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.838479 kubelet[2798]: E0325 02:46:37.837836 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.842667 kubelet[2798]: E0325 02:46:37.842643 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.843040 kubelet[2798]: W0325 02:46:37.842860 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.845805 kubelet[2798]: E0325 02:46:37.843944 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.846308 kubelet[2798]: E0325 02:46:37.846118 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.846308 kubelet[2798]: W0325 02:46:37.846139 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.850678 kubelet[2798]: E0325 02:46:37.848788 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.850678 kubelet[2798]: E0325 02:46:37.849051 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.850678 kubelet[2798]: W0325 02:46:37.849070 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.851133 kubelet[2798]: E0325 02:46:37.851110 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.851445 kubelet[2798]: W0325 02:46:37.851257 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.853086 kubelet[2798]: E0325 02:46:37.853061 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.854419 kubelet[2798]: W0325 02:46:37.853195 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.855088 kubelet[2798]: E0325 02:46:37.855065 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.855445 kubelet[2798]: W0325 02:46:37.855197 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.855445 kubelet[2798]: E0325 02:46:37.855226 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.856083 kubelet[2798]: E0325 02:46:37.856061 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.856979 kubelet[2798]: W0325 02:46:37.856197 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.856979 kubelet[2798]: E0325 02:46:37.856225 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.856979 kubelet[2798]: E0325 02:46:37.856944 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.857262 kubelet[2798]: E0325 02:46:37.857005 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.857440 kubelet[2798]: E0325 02:46:37.857412 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.857440 kubelet[2798]: W0325 02:46:37.857438 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.857637 kubelet[2798]: E0325 02:46:37.857457 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.857845 kubelet[2798]: E0325 02:46:37.857820 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.928736 kubelet[2798]: E0325 02:46:37.928685 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.928736 kubelet[2798]: W0325 02:46:37.928718 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.929403 kubelet[2798]: E0325 02:46:37.928768 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.929403 kubelet[2798]: E0325 02:46:37.929187 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.929403 kubelet[2798]: W0325 02:46:37.929203 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.929403 kubelet[2798]: E0325 02:46:37.929244 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.929684 kubelet[2798]: E0325 02:46:37.929576 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.929684 kubelet[2798]: W0325 02:46:37.929591 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.929684 kubelet[2798]: E0325 02:46:37.929627 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.931246 kubelet[2798]: E0325 02:46:37.931069 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.931246 kubelet[2798]: W0325 02:46:37.931099 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.931246 kubelet[2798]: E0325 02:46:37.931119 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.932531 kubelet[2798]: E0325 02:46:37.932358 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.932531 kubelet[2798]: W0325 02:46:37.932378 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.932531 kubelet[2798]: E0325 02:46:37.932395 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.933698 kubelet[2798]: E0325 02:46:37.933579 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.933698 kubelet[2798]: W0325 02:46:37.933599 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.933698 kubelet[2798]: E0325 02:46:37.933628 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.934574 kubelet[2798]: E0325 02:46:37.934171 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.934574 kubelet[2798]: W0325 02:46:37.934477 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.934574 kubelet[2798]: E0325 02:46:37.934502 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.935758 kubelet[2798]: E0325 02:46:37.935607 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.935758 kubelet[2798]: W0325 02:46:37.935646 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.935758 kubelet[2798]: E0325 02:46:37.935664 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.938172 kubelet[2798]: E0325 02:46:37.937261 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.938172 kubelet[2798]: W0325 02:46:37.937383 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.938172 kubelet[2798]: E0325 02:46:37.937409 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.938525 kubelet[2798]: E0325 02:46:37.938504 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.938903 kubelet[2798]: W0325 02:46:37.938719 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.938903 kubelet[2798]: E0325 02:46:37.938747 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.940909 kubelet[2798]: E0325 02:46:37.940302 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.940909 kubelet[2798]: W0325 02:46:37.940322 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.940909 kubelet[2798]: E0325 02:46:37.940455 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.941723 kubelet[2798]: E0325 02:46:37.941545 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.941723 kubelet[2798]: W0325 02:46:37.941565 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.941723 kubelet[2798]: E0325 02:46:37.941582 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.943364 kubelet[2798]: E0325 02:46:37.942994 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.943364 kubelet[2798]: W0325 02:46:37.943014 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.943364 kubelet[2798]: E0325 02:46:37.943031 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.944049 kubelet[2798]: E0325 02:46:37.943855 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.944049 kubelet[2798]: W0325 02:46:37.943931 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.944049 kubelet[2798]: E0325 02:46:37.943953 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.945335 kubelet[2798]: E0325 02:46:37.945139 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.945335 kubelet[2798]: W0325 02:46:37.945159 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.945335 kubelet[2798]: E0325 02:46:37.945206 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.946815 kubelet[2798]: E0325 02:46:37.946746 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.946815 kubelet[2798]: W0325 02:46:37.946770 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.946815 kubelet[2798]: E0325 02:46:37.946787 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.948461 kubelet[2798]: E0325 02:46:37.948290 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.948461 kubelet[2798]: W0325 02:46:37.948312 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.948461 kubelet[2798]: E0325 02:46:37.948330 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.948783 kubelet[2798]: E0325 02:46:37.948763 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.948999 kubelet[2798]: W0325 02:46:37.948908 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.948999 kubelet[2798]: E0325 02:46:37.948935 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.950034 kubelet[2798]: E0325 02:46:37.949491 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.950034 kubelet[2798]: W0325 02:46:37.949510 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.950034 kubelet[2798]: E0325 02:46:37.949527 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.950907 kubelet[2798]: E0325 02:46:37.950394 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.950907 kubelet[2798]: W0325 02:46:37.950413 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.950907 kubelet[2798]: E0325 02:46:37.950429 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.952374 kubelet[2798]: E0325 02:46:37.952353 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.952660 kubelet[2798]: W0325 02:46:37.952504 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.952660 kubelet[2798]: E0325 02:46:37.952534 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.953288 kubelet[2798]: E0325 02:46:37.953267 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.953453 kubelet[2798]: W0325 02:46:37.953387 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.953453 kubelet[2798]: E0325 02:46:37.953415 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.953977 kubelet[2798]: I0325 02:46:37.953939 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d008efc-d6e3-44fa-b5c6-37d59264bf67-kubelet-dir\") pod \"csi-node-driver-h6zz7\" (UID: \"2d008efc-d6e3-44fa-b5c6-37d59264bf67\") " pod="calico-system/csi-node-driver-h6zz7" Mar 25 02:46:37.954514 kubelet[2798]: E0325 02:46:37.954348 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.954514 kubelet[2798]: W0325 02:46:37.954484 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.955009 kubelet[2798]: E0325 02:46:37.954776 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.956315 kubelet[2798]: E0325 02:46:37.956149 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.956315 kubelet[2798]: W0325 02:46:37.956169 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.956315 kubelet[2798]: E0325 02:46:37.956221 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.957462 kubelet[2798]: E0325 02:46:37.957211 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.957462 kubelet[2798]: W0325 02:46:37.957230 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.957462 kubelet[2798]: E0325 02:46:37.957247 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.958231 kubelet[2798]: E0325 02:46:37.957993 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.958231 kubelet[2798]: W0325 02:46:37.958008 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.958231 kubelet[2798]: E0325 02:46:37.958025 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.958901 kubelet[2798]: E0325 02:46:37.958810 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.958901 kubelet[2798]: W0325 02:46:37.958829 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.959157 kubelet[2798]: E0325 02:46:37.958976 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.959479 kubelet[2798]: I0325 02:46:37.959015 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt4bv\" (UniqueName: \"kubernetes.io/projected/2d008efc-d6e3-44fa-b5c6-37d59264bf67-kube-api-access-xt4bv\") pod \"csi-node-driver-h6zz7\" (UID: \"2d008efc-d6e3-44fa-b5c6-37d59264bf67\") " pod="calico-system/csi-node-driver-h6zz7" Mar 25 02:46:37.961045 kubelet[2798]: E0325 02:46:37.960983 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.961045 kubelet[2798]: W0325 02:46:37.961011 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.961045 kubelet[2798]: E0325 02:46:37.961034 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.961349 kubelet[2798]: I0325 02:46:37.961317 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2d008efc-d6e3-44fa-b5c6-37d59264bf67-registration-dir\") pod \"csi-node-driver-h6zz7\" (UID: \"2d008efc-d6e3-44fa-b5c6-37d59264bf67\") " pod="calico-system/csi-node-driver-h6zz7" Mar 25 02:46:37.962209 kubelet[2798]: E0325 02:46:37.961640 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.962209 kubelet[2798]: W0325 02:46:37.961666 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.962209 kubelet[2798]: E0325 02:46:37.961693 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.962765 kubelet[2798]: E0325 02:46:37.962673 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.962765 kubelet[2798]: W0325 02:46:37.962692 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.962765 kubelet[2798]: E0325 02:46:37.962738 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.963904 kubelet[2798]: E0325 02:46:37.963201 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.963904 kubelet[2798]: W0325 02:46:37.963221 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.963904 kubelet[2798]: E0325 02:46:37.963263 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.963904 kubelet[2798]: I0325 02:46:37.963297 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2d008efc-d6e3-44fa-b5c6-37d59264bf67-socket-dir\") pod \"csi-node-driver-h6zz7\" (UID: \"2d008efc-d6e3-44fa-b5c6-37d59264bf67\") " pod="calico-system/csi-node-driver-h6zz7" Mar 25 02:46:37.964663 kubelet[2798]: E0325 02:46:37.964475 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.964663 kubelet[2798]: W0325 02:46:37.964494 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.964663 kubelet[2798]: E0325 02:46:37.964521 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.965061 kubelet[2798]: E0325 02:46:37.965040 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.965251 kubelet[2798]: W0325 02:46:37.965152 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.965449 kubelet[2798]: E0325 02:46:37.965353 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.966000 kubelet[2798]: E0325 02:46:37.965979 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.966211 kubelet[2798]: W0325 02:46:37.966079 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.966211 kubelet[2798]: E0325 02:46:37.966191 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.966537 kubelet[2798]: E0325 02:46:37.966516 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.966830 kubelet[2798]: W0325 02:46:37.966675 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.966830 kubelet[2798]: E0325 02:46:37.966710 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.967405 kubelet[2798]: E0325 02:46:37.967229 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.967405 kubelet[2798]: W0325 02:46:37.967248 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.967405 kubelet[2798]: E0325 02:46:37.967274 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.967739 kubelet[2798]: E0325 02:46:37.967718 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.967832 kubelet[2798]: W0325 02:46:37.967812 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.967971 kubelet[2798]: E0325 02:46:37.967949 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.968540 kubelet[2798]: E0325 02:46:37.968407 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.968540 kubelet[2798]: W0325 02:46:37.968426 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.968540 kubelet[2798]: E0325 02:46:37.968455 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:37.968540 kubelet[2798]: I0325 02:46:37.968489 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2d008efc-d6e3-44fa-b5c6-37d59264bf67-varrun\") pod \"csi-node-driver-h6zz7\" (UID: \"2d008efc-d6e3-44fa-b5c6-37d59264bf67\") " pod="calico-system/csi-node-driver-h6zz7" Mar 25 02:46:37.969217 kubelet[2798]: E0325 02:46:37.969189 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.969217 kubelet[2798]: W0325 02:46:37.969214 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.969331 kubelet[2798]: E0325 02:46:37.969233 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:37.970157 kubelet[2798]: E0325 02:46:37.970135 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:37.970306 kubelet[2798]: W0325 02:46:37.970250 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:37.970306 kubelet[2798]: E0325 02:46:37.970275 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.070554 kubelet[2798]: E0325 02:46:38.070303 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.070554 kubelet[2798]: W0325 02:46:38.070356 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.070554 kubelet[2798]: E0325 02:46:38.070393 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.073170 kubelet[2798]: E0325 02:46:38.073132 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.073170 kubelet[2798]: W0325 02:46:38.073156 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.073318 kubelet[2798]: E0325 02:46:38.073175 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.073961 kubelet[2798]: E0325 02:46:38.073935 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.073961 kubelet[2798]: W0325 02:46:38.073958 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.074597 kubelet[2798]: E0325 02:46:38.074444 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.076064 kubelet[2798]: E0325 02:46:38.076037 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.076064 kubelet[2798]: W0325 02:46:38.076060 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.076184 kubelet[2798]: E0325 02:46:38.076103 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.076957 kubelet[2798]: E0325 02:46:38.076932 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.076957 kubelet[2798]: W0325 02:46:38.076955 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.077276 kubelet[2798]: E0325 02:46:38.077226 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.077670 kubelet[2798]: E0325 02:46:38.077634 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.077670 kubelet[2798]: W0325 02:46:38.077659 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.077787 kubelet[2798]: E0325 02:46:38.077696 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.078335 kubelet[2798]: E0325 02:46:38.078300 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.078335 kubelet[2798]: W0325 02:46:38.078326 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.078464 kubelet[2798]: E0325 02:46:38.078351 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.078805 kubelet[2798]: E0325 02:46:38.078779 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.078805 kubelet[2798]: W0325 02:46:38.078802 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.079019 kubelet[2798]: E0325 02:46:38.078994 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.079431 kubelet[2798]: E0325 02:46:38.079406 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.079431 kubelet[2798]: W0325 02:46:38.079428 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.079839 kubelet[2798]: E0325 02:46:38.079708 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.079968 kubelet[2798]: E0325 02:46:38.079943 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.079968 kubelet[2798]: W0325 02:46:38.079958 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.080183 kubelet[2798]: E0325 02:46:38.080156 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.080587 kubelet[2798]: E0325 02:46:38.080563 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.080587 kubelet[2798]: W0325 02:46:38.080587 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.080753 kubelet[2798]: E0325 02:46:38.080721 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.081445 kubelet[2798]: E0325 02:46:38.081245 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.081445 kubelet[2798]: W0325 02:46:38.081269 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.081570 kubelet[2798]: E0325 02:46:38.081476 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.081926 kubelet[2798]: E0325 02:46:38.081860 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.081926 kubelet[2798]: W0325 02:46:38.081893 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.082226 kubelet[2798]: E0325 02:46:38.082192 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.082376 kubelet[2798]: E0325 02:46:38.082354 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.082376 kubelet[2798]: W0325 02:46:38.082376 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.082499 kubelet[2798]: E0325 02:46:38.082471 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.082813 kubelet[2798]: E0325 02:46:38.082789 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.082813 kubelet[2798]: W0325 02:46:38.082813 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.083481 kubelet[2798]: E0325 02:46:38.083362 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.083590 kubelet[2798]: E0325 02:46:38.083522 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.083590 kubelet[2798]: W0325 02:46:38.083537 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.083855 kubelet[2798]: E0325 02:46:38.083691 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.084250 kubelet[2798]: E0325 02:46:38.084219 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.084250 kubelet[2798]: W0325 02:46:38.084240 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.084651 kubelet[2798]: E0325 02:46:38.084626 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.085072 kubelet[2798]: E0325 02:46:38.085048 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.085072 kubelet[2798]: W0325 02:46:38.085070 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.086221 kubelet[2798]: E0325 02:46:38.086078 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.086295 kubelet[2798]: E0325 02:46:38.086283 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.086399 kubelet[2798]: W0325 02:46:38.086298 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.086399 kubelet[2798]: E0325 02:46:38.086339 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.086869 kubelet[2798]: E0325 02:46:38.086833 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.086869 kubelet[2798]: W0325 02:46:38.086859 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.087131 kubelet[2798]: E0325 02:46:38.087105 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.087509 kubelet[2798]: E0325 02:46:38.087484 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.087509 kubelet[2798]: W0325 02:46:38.087507 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.087988 kubelet[2798]: E0325 02:46:38.087960 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.088620 kubelet[2798]: E0325 02:46:38.088588 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.088709 kubelet[2798]: W0325 02:46:38.088621 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.089035 kubelet[2798]: E0325 02:46:38.088913 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.089509 kubelet[2798]: E0325 02:46:38.089457 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.089509 kubelet[2798]: W0325 02:46:38.089481 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.089659 kubelet[2798]: E0325 02:46:38.089518 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.090516 kubelet[2798]: E0325 02:46:38.090387 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.091003 kubelet[2798]: W0325 02:46:38.090519 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.091003 kubelet[2798]: E0325 02:46:38.090808 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.092174 kubelet[2798]: E0325 02:46:38.092148 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.092174 kubelet[2798]: W0325 02:46:38.092172 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.092315 kubelet[2798]: E0325 02:46:38.092200 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.092645 kubelet[2798]: E0325 02:46:38.092581 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.092645 kubelet[2798]: W0325 02:46:38.092603 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.093266 kubelet[2798]: E0325 02:46:38.093093 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.093592 kubelet[2798]: E0325 02:46:38.093567 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.093592 kubelet[2798]: W0325 02:46:38.093588 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.093747 kubelet[2798]: E0325 02:46:38.093654 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.094499 kubelet[2798]: E0325 02:46:38.094285 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.094821 kubelet[2798]: W0325 02:46:38.094501 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.094821 kubelet[2798]: E0325 02:46:38.094519 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.095322 kubelet[2798]: E0325 02:46:38.095298 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.095322 kubelet[2798]: W0325 02:46:38.095319 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.095444 kubelet[2798]: E0325 02:46:38.095338 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.096016 kubelet[2798]: E0325 02:46:38.095989 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.096016 kubelet[2798]: W0325 02:46:38.096010 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.096156 kubelet[2798]: E0325 02:46:38.096028 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.193485 kubelet[2798]: E0325 02:46:38.193282 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.193485 kubelet[2798]: W0325 02:46:38.193322 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.193485 kubelet[2798]: E0325 02:46:38.193354 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.194065 kubelet[2798]: E0325 02:46:38.193715 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.194065 kubelet[2798]: W0325 02:46:38.193730 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.194065 kubelet[2798]: E0325 02:46:38.193749 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.194232 kubelet[2798]: E0325 02:46:38.194068 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.194232 kubelet[2798]: W0325 02:46:38.194083 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.194232 kubelet[2798]: E0325 02:46:38.194098 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.194515 kubelet[2798]: E0325 02:46:38.194382 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.194515 kubelet[2798]: W0325 02:46:38.194404 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.194515 kubelet[2798]: E0325 02:46:38.194420 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.194715 kubelet[2798]: E0325 02:46:38.194691 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.194715 kubelet[2798]: W0325 02:46:38.194712 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.194840 kubelet[2798]: E0325 02:46:38.194729 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.195066 kubelet[2798]: E0325 02:46:38.195046 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.195066 kubelet[2798]: W0325 02:46:38.195066 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.195153 kubelet[2798]: E0325 02:46:38.195103 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.296618 kubelet[2798]: E0325 02:46:38.296544 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.296618 kubelet[2798]: W0325 02:46:38.296587 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.296618 kubelet[2798]: E0325 02:46:38.296630 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.297066 kubelet[2798]: E0325 02:46:38.297038 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.297066 kubelet[2798]: W0325 02:46:38.297054 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.297225 kubelet[2798]: E0325 02:46:38.297070 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.297549 kubelet[2798]: E0325 02:46:38.297457 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.297549 kubelet[2798]: W0325 02:46:38.297484 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.297549 kubelet[2798]: E0325 02:46:38.297502 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.297972 kubelet[2798]: E0325 02:46:38.297798 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.297972 kubelet[2798]: W0325 02:46:38.297813 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.297972 kubelet[2798]: E0325 02:46:38.297829 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.298155 kubelet[2798]: E0325 02:46:38.298132 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.298216 kubelet[2798]: W0325 02:46:38.298155 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.298216 kubelet[2798]: E0325 02:46:38.298171 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.298539 kubelet[2798]: E0325 02:46:38.298495 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.298539 kubelet[2798]: W0325 02:46:38.298531 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.298689 kubelet[2798]: E0325 02:46:38.298549 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.381125 kubelet[2798]: E0325 02:46:38.380899 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.381125 kubelet[2798]: W0325 02:46:38.380938 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.381125 kubelet[2798]: E0325 02:46:38.380973 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.399640 kubelet[2798]: E0325 02:46:38.399584 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.400029 kubelet[2798]: W0325 02:46:38.399837 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.400029 kubelet[2798]: E0325 02:46:38.399916 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.400801 kubelet[2798]: E0325 02:46:38.400573 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.400801 kubelet[2798]: W0325 02:46:38.400596 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.400801 kubelet[2798]: E0325 02:46:38.400624 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.401286 kubelet[2798]: E0325 02:46:38.401110 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.401286 kubelet[2798]: W0325 02:46:38.401129 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.401286 kubelet[2798]: E0325 02:46:38.401146 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.401681 kubelet[2798]: E0325 02:46:38.401551 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.401681 kubelet[2798]: W0325 02:46:38.401570 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.401681 kubelet[2798]: E0325 02:46:38.401587 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.402321 kubelet[2798]: E0325 02:46:38.402217 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.402321 kubelet[2798]: W0325 02:46:38.402236 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.402321 kubelet[2798]: E0325 02:46:38.402253 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.503620 kubelet[2798]: E0325 02:46:38.503557 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.503620 kubelet[2798]: W0325 02:46:38.503596 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.504698 kubelet[2798]: E0325 02:46:38.503638 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.504698 kubelet[2798]: E0325 02:46:38.503968 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.504698 kubelet[2798]: W0325 02:46:38.503982 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.504698 kubelet[2798]: E0325 02:46:38.503999 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.504698 kubelet[2798]: E0325 02:46:38.504281 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.504698 kubelet[2798]: W0325 02:46:38.504295 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.504698 kubelet[2798]: E0325 02:46:38.504321 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.504698 kubelet[2798]: E0325 02:46:38.504605 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.504698 kubelet[2798]: W0325 02:46:38.504630 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.504698 kubelet[2798]: E0325 02:46:38.504645 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.505230 kubelet[2798]: E0325 02:46:38.504923 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.505230 kubelet[2798]: W0325 02:46:38.504940 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.505230 kubelet[2798]: E0325 02:46:38.504959 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.607288 kubelet[2798]: E0325 02:46:38.607189 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.608281 kubelet[2798]: W0325 02:46:38.607228 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.608281 kubelet[2798]: E0325 02:46:38.608080 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.608497 kubelet[2798]: E0325 02:46:38.608388 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.608497 kubelet[2798]: W0325 02:46:38.608404 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.608497 kubelet[2798]: E0325 02:46:38.608420 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.609202 kubelet[2798]: E0325 02:46:38.608703 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.609202 kubelet[2798]: W0325 02:46:38.608726 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.609202 kubelet[2798]: E0325 02:46:38.608744 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.609410 kubelet[2798]: E0325 02:46:38.609222 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.609410 kubelet[2798]: W0325 02:46:38.609250 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.609410 kubelet[2798]: E0325 02:46:38.609266 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.610070 kubelet[2798]: E0325 02:46:38.609591 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.610070 kubelet[2798]: W0325 02:46:38.609622 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.610070 kubelet[2798]: E0325 02:46:38.609640 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.623551 kubelet[2798]: E0325 02:46:38.623130 2798 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 25 02:46:38.623551 kubelet[2798]: E0325 02:46:38.623347 2798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cc364839-809a-4c22-869e-01d17cf44089-tigera-ca-bundle podName:cc364839-809a-4c22-869e-01d17cf44089 nodeName:}" failed. No retries permitted until 2025-03-25 02:46:39.123289788 +0000 UTC m=+18.681509198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/cc364839-809a-4c22-869e-01d17cf44089-tigera-ca-bundle") pod "calico-typha-6bc4bd9dcd-767tk" (UID: "cc364839-809a-4c22-869e-01d17cf44089") : failed to sync configmap cache: timed out waiting for the condition Mar 25 02:46:38.650444 kubelet[2798]: E0325 02:46:38.649409 2798 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 25 02:46:38.650444 kubelet[2798]: E0325 02:46:38.649517 2798 projected.go:194] Error preparing data for projected volume kube-api-access-pfwkg for pod calico-system/calico-typha-6bc4bd9dcd-767tk: failed to sync configmap cache: timed out waiting for the condition Mar 25 02:46:38.650444 kubelet[2798]: E0325 02:46:38.649683 2798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc364839-809a-4c22-869e-01d17cf44089-kube-api-access-pfwkg podName:cc364839-809a-4c22-869e-01d17cf44089 nodeName:}" failed. No retries permitted until 2025-03-25 02:46:39.149655076 +0000 UTC m=+18.707874486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfwkg" (UniqueName: "kubernetes.io/projected/cc364839-809a-4c22-869e-01d17cf44089-kube-api-access-pfwkg") pod "calico-typha-6bc4bd9dcd-767tk" (UID: "cc364839-809a-4c22-869e-01d17cf44089") : failed to sync configmap cache: timed out waiting for the condition Mar 25 02:46:38.711271 kubelet[2798]: E0325 02:46:38.711201 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.711271 kubelet[2798]: W0325 02:46:38.711264 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.711522 kubelet[2798]: E0325 02:46:38.711296 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.712394 kubelet[2798]: E0325 02:46:38.711741 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.712394 kubelet[2798]: W0325 02:46:38.711758 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.712394 kubelet[2798]: E0325 02:46:38.711775 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.712394 kubelet[2798]: E0325 02:46:38.712310 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.712394 kubelet[2798]: W0325 02:46:38.712326 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.712394 kubelet[2798]: E0325 02:46:38.712342 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.712915 kubelet[2798]: E0325 02:46:38.712713 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.712915 kubelet[2798]: W0325 02:46:38.712728 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.712915 kubelet[2798]: E0325 02:46:38.712744 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.713119 kubelet[2798]: E0325 02:46:38.713097 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.713119 kubelet[2798]: W0325 02:46:38.713118 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.713216 kubelet[2798]: E0325 02:46:38.713135 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.731947 kubelet[2798]: E0325 02:46:38.730807 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.731947 kubelet[2798]: W0325 02:46:38.730840 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.731947 kubelet[2798]: E0325 02:46:38.730864 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.747257 kubelet[2798]: E0325 02:46:38.747215 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.747257 kubelet[2798]: W0325 02:46:38.747251 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.747401 kubelet[2798]: E0325 02:46:38.747286 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.758844 kubelet[2798]: E0325 02:46:38.758815 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.758844 kubelet[2798]: W0325 02:46:38.758840 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.758996 kubelet[2798]: E0325 02:46:38.758861 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.814952 kubelet[2798]: E0325 02:46:38.814860 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.814952 kubelet[2798]: W0325 02:46:38.814938 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.815202 kubelet[2798]: E0325 02:46:38.814982 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.816061 kubelet[2798]: E0325 02:46:38.815340 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.816061 kubelet[2798]: W0325 02:46:38.815355 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.816061 kubelet[2798]: E0325 02:46:38.815372 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:38.908107 containerd[1520]: time="2025-03-25T02:46:38.907812425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x74dh,Uid:0955d07c-084c-4019-8850-7cc3d58361bd,Namespace:calico-system,Attempt:0,}" Mar 25 02:46:38.917202 kubelet[2798]: E0325 02:46:38.916907 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.917202 kubelet[2798]: W0325 02:46:38.916989 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.917202 kubelet[2798]: E0325 02:46:38.917029 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.917687 kubelet[2798]: E0325 02:46:38.917663 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:38.917841 kubelet[2798]: W0325 02:46:38.917766 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:38.917841 kubelet[2798]: E0325 02:46:38.917792 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:38.947912 containerd[1520]: time="2025-03-25T02:46:38.947651968Z" level=info msg="connecting to shim 4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821" address="unix:///run/containerd/s/82e6d539e6fa4cf0851ac20cfdddd0fec7892e8951ce688e05a54d9d79646c8f" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:46:38.994110 systemd[1]: Started cri-containerd-4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821.scope - libcontainer container 4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821. Mar 25 02:46:39.020103 kubelet[2798]: E0325 02:46:39.020053 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.020103 kubelet[2798]: W0325 02:46:39.020100 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.020542 kubelet[2798]: E0325 02:46:39.020132 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.020670 kubelet[2798]: E0325 02:46:39.020648 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.020771 kubelet[2798]: W0325 02:46:39.020670 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.020860 kubelet[2798]: E0325 02:46:39.020792 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:39.049751 containerd[1520]: time="2025-03-25T02:46:39.049572528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x74dh,Uid:0955d07c-084c-4019-8850-7cc3d58361bd,Namespace:calico-system,Attempt:0,} returns sandbox id \"4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821\"" Mar 25 02:46:39.051738 containerd[1520]: time="2025-03-25T02:46:39.051694923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 25 02:46:39.121815 kubelet[2798]: E0325 02:46:39.121747 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.121815 kubelet[2798]: W0325 02:46:39.121794 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.121815 kubelet[2798]: E0325 02:46:39.121828 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.122429 kubelet[2798]: E0325 02:46:39.122404 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.122490 kubelet[2798]: W0325 02:46:39.122442 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.122490 kubelet[2798]: E0325 02:46:39.122462 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.223812 kubelet[2798]: E0325 02:46:39.223642 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.223812 kubelet[2798]: W0325 02:46:39.223680 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.223812 kubelet[2798]: E0325 02:46:39.223713 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.225406 kubelet[2798]: E0325 02:46:39.224176 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.225406 kubelet[2798]: W0325 02:46:39.224192 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.225406 kubelet[2798]: E0325 02:46:39.224218 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:39.225406 kubelet[2798]: E0325 02:46:39.224646 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.225406 kubelet[2798]: W0325 02:46:39.224679 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.225406 kubelet[2798]: E0325 02:46:39.224735 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.226381 kubelet[2798]: E0325 02:46:39.225583 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.226381 kubelet[2798]: W0325 02:46:39.225611 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.226381 kubelet[2798]: E0325 02:46:39.225690 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.226381 kubelet[2798]: E0325 02:46:39.225913 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.226381 kubelet[2798]: W0325 02:46:39.225929 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.226381 kubelet[2798]: E0325 02:46:39.225957 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.226381 kubelet[2798]: E0325 02:46:39.226225 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.226381 kubelet[2798]: W0325 02:46:39.226240 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.226381 kubelet[2798]: E0325 02:46:39.226267 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.228399 kubelet[2798]: E0325 02:46:39.226521 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.228399 kubelet[2798]: W0325 02:46:39.226535 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.228399 kubelet[2798]: E0325 02:46:39.226561 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:39.228399 kubelet[2798]: E0325 02:46:39.226979 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.228399 kubelet[2798]: W0325 02:46:39.226997 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.228399 kubelet[2798]: E0325 02:46:39.227052 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.228399 kubelet[2798]: E0325 02:46:39.227258 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.228399 kubelet[2798]: W0325 02:46:39.227273 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.228399 kubelet[2798]: E0325 02:46:39.227289 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.228399 kubelet[2798]: E0325 02:46:39.227552 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.228896 kubelet[2798]: W0325 02:46:39.227566 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.228896 kubelet[2798]: E0325 02:46:39.227582 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.229730 kubelet[2798]: E0325 02:46:39.229705 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.229831 kubelet[2798]: W0325 02:46:39.229809 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.229989 kubelet[2798]: E0325 02:46:39.229965 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:46:39.234775 kubelet[2798]: E0325 02:46:39.234751 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:46:39.234775 kubelet[2798]: W0325 02:46:39.234774 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:46:39.234956 kubelet[2798]: E0325 02:46:39.234793 2798 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:46:39.279271 containerd[1520]: time="2025-03-25T02:46:39.279179746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bc4bd9dcd-767tk,Uid:cc364839-809a-4c22-869e-01d17cf44089,Namespace:calico-system,Attempt:0,}" Mar 25 02:46:39.415223 containerd[1520]: time="2025-03-25T02:46:39.414481693Z" level=info msg="connecting to shim 351477adb6d2f160d85d4798d2367e679b99a31bd788a211f713390d6b4716b2" address="unix:///run/containerd/s/807cfa4f32d16347fd746e2da7b798caa48e9fff542bb71067508a60856a56ca" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:46:39.485223 systemd[1]: Started cri-containerd-351477adb6d2f160d85d4798d2367e679b99a31bd788a211f713390d6b4716b2.scope - libcontainer container 351477adb6d2f160d85d4798d2367e679b99a31bd788a211f713390d6b4716b2. Mar 25 02:46:39.584864 containerd[1520]: time="2025-03-25T02:46:39.584785306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bc4bd9dcd-767tk,Uid:cc364839-809a-4c22-869e-01d17cf44089,Namespace:calico-system,Attempt:0,} returns sandbox id \"351477adb6d2f160d85d4798d2367e679b99a31bd788a211f713390d6b4716b2\"" Mar 25 02:46:39.631396 kubelet[2798]: E0325 02:46:39.631298 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6zz7" podUID="2d008efc-d6e3-44fa-b5c6-37d59264bf67" Mar 25 02:46:41.149277 containerd[1520]: time="2025-03-25T02:46:41.149081293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:46:41.151552 containerd[1520]: time="2025-03-25T02:46:41.151490629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 25 02:46:41.152629 containerd[1520]: time="2025-03-25T02:46:41.152523648Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:46:41.155525 containerd[1520]: time="2025-03-25T02:46:41.155461803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:46:41.156765 containerd[1520]: time="2025-03-25T02:46:41.156546685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 2.104800443s" Mar 25 02:46:41.156765 containerd[1520]: time="2025-03-25T02:46:41.156632859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 25 02:46:41.158090 containerd[1520]: time="2025-03-25T02:46:41.158059964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 25 02:46:41.160983 containerd[1520]: time="2025-03-25T02:46:41.160824747Z" level=info msg="CreateContainer within sandbox 
\"4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 25 02:46:41.185936 containerd[1520]: time="2025-03-25T02:46:41.185674201Z" level=info msg="Container 3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:46:41.199185 containerd[1520]: time="2025-03-25T02:46:41.199096370Z" level=info msg="CreateContainer within sandbox \"4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144\"" Mar 25 02:46:41.202232 containerd[1520]: time="2025-03-25T02:46:41.201590688Z" level=info msg="StartContainer for \"3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144\"" Mar 25 02:46:41.205916 containerd[1520]: time="2025-03-25T02:46:41.205834384Z" level=info msg="connecting to shim 3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144" address="unix:///run/containerd/s/82e6d539e6fa4cf0851ac20cfdddd0fec7892e8951ce688e05a54d9d79646c8f" protocol=ttrpc version=3 Mar 25 02:46:41.244090 systemd[1]: Started cri-containerd-3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144.scope - libcontainer container 3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144. Mar 25 02:46:41.350282 containerd[1520]: time="2025-03-25T02:46:41.350200766Z" level=info msg="StartContainer for \"3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144\" returns successfully" Mar 25 02:46:41.368975 systemd[1]: cri-containerd-3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144.scope: Deactivated successfully. Mar 25 02:46:41.369902 systemd[1]: cri-containerd-3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144.scope: Consumed 68ms CPU time, 8M memory peak, 5.8M written to disk. Mar 25 02:46:41.394046 containerd[1520]: time="2025-03-25T02:46:41.393912331Z" level=info msg="received exit event container_id:\"3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144\" id:\"3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144\" pid:3440 exited_at:{seconds:1742870801 nanos:375146767}" Mar 25 02:46:41.394046 containerd[1520]: time="2025-03-25T02:46:41.393982225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144\" id:\"3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144\" pid:3440 exited_at:{seconds:1742870801 nanos:375146767}" Mar 25 02:46:41.439598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144-rootfs.mount: Deactivated successfully. 
Mar 25 02:46:41.640167 kubelet[2798]: E0325 02:46:41.639975 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6zz7" podUID="2d008efc-d6e3-44fa-b5c6-37d59264bf67" Mar 25 02:46:43.641410 kubelet[2798]: E0325 02:46:43.641196 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6zz7" podUID="2d008efc-d6e3-44fa-b5c6-37d59264bf67" Mar 25 02:46:44.225759 containerd[1520]: time="2025-03-25T02:46:44.224663684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:46:44.227756 containerd[1520]: time="2025-03-25T02:46:44.227600381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075" Mar 25 02:46:44.228639 containerd[1520]: time="2025-03-25T02:46:44.228553424Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:46:44.232771 containerd[1520]: time="2025-03-25T02:46:44.232697008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:46:44.234028 containerd[1520]: time="2025-03-25T02:46:44.233669842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 3.075449149s" Mar 25 02:46:44.234028 containerd[1520]: time="2025-03-25T02:46:44.233713184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\"" Mar 25 02:46:44.235454 containerd[1520]: time="2025-03-25T02:46:44.235226552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 25 02:46:44.284410 containerd[1520]: time="2025-03-25T02:46:44.283851728Z" level=info msg="CreateContainer within sandbox \"351477adb6d2f160d85d4798d2367e679b99a31bd788a211f713390d6b4716b2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 25 02:46:44.299342 containerd[1520]: time="2025-03-25T02:46:44.299294199Z" level=info msg="Container b3ae034377164309039b30bc141fa687f7fc0997b69a4aabfd4bc75cb4a3d49b: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:46:44.307277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4096663198.mount: Deactivated successfully. 
Mar 25 02:46:44.313292 containerd[1520]: time="2025-03-25T02:46:44.313242564Z" level=info msg="CreateContainer within sandbox \"351477adb6d2f160d85d4798d2367e679b99a31bd788a211f713390d6b4716b2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b3ae034377164309039b30bc141fa687f7fc0997b69a4aabfd4bc75cb4a3d49b\"" Mar 25 02:46:44.314439 containerd[1520]: time="2025-03-25T02:46:44.313994717Z" level=info msg="StartContainer for \"b3ae034377164309039b30bc141fa687f7fc0997b69a4aabfd4bc75cb4a3d49b\"" Mar 25 02:46:44.316080 containerd[1520]: time="2025-03-25T02:46:44.316029676Z" level=info msg="connecting to shim b3ae034377164309039b30bc141fa687f7fc0997b69a4aabfd4bc75cb4a3d49b" address="unix:///run/containerd/s/807cfa4f32d16347fd746e2da7b798caa48e9fff542bb71067508a60856a56ca" protocol=ttrpc version=3 Mar 25 02:46:44.359137 systemd[1]: Started cri-containerd-b3ae034377164309039b30bc141fa687f7fc0997b69a4aabfd4bc75cb4a3d49b.scope - libcontainer container b3ae034377164309039b30bc141fa687f7fc0997b69a4aabfd4bc75cb4a3d49b. Mar 25 02:46:44.460116 containerd[1520]: time="2025-03-25T02:46:44.460027608Z" level=info msg="StartContainer for \"b3ae034377164309039b30bc141fa687f7fc0997b69a4aabfd4bc75cb4a3d49b\" returns successfully" Mar 25 02:46:44.841049 kubelet[2798]: I0325 02:46:44.840951 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bc4bd9dcd-767tk" podStartSLOduration=3.193960877 podStartE2EDuration="7.840918597s" podCreationTimestamp="2025-03-25 02:46:37 +0000 UTC" firstStartedPulling="2025-03-25 02:46:39.588070942 +0000 UTC m=+19.146290345" lastFinishedPulling="2025-03-25 02:46:44.235028654 +0000 UTC m=+23.793248065" observedRunningTime="2025-03-25 02:46:44.83971486 +0000 UTC m=+24.397934308" watchObservedRunningTime="2025-03-25 02:46:44.840918597 +0000 UTC m=+24.399138028" Mar 25 02:46:45.631958 kubelet[2798]: E0325 02:46:45.631841 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6zz7" podUID="2d008efc-d6e3-44fa-b5c6-37d59264bf67" Mar 25 02:46:45.831488 kubelet[2798]: I0325 02:46:45.831201 2798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:46:47.631930 kubelet[2798]: E0325 02:46:47.631713 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6zz7" podUID="2d008efc-d6e3-44fa-b5c6-37d59264bf67" Mar 25 02:46:49.632559 kubelet[2798]: E0325 02:46:49.632414 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6zz7" podUID="2d008efc-d6e3-44fa-b5c6-37d59264bf67" Mar 25 02:46:50.611047 containerd[1520]: time="2025-03-25T02:46:50.610770235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:46:50.613164 containerd[1520]: time="2025-03-25T02:46:50.613097189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes 
read=97781477" Mar 25 02:46:50.614713 containerd[1520]: time="2025-03-25T02:46:50.614670100Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:46:50.619373 containerd[1520]: time="2025-03-25T02:46:50.619247420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:46:50.620808 containerd[1520]: time="2025-03-25T02:46:50.620374804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 6.385105209s" Mar 25 02:46:50.620808 containerd[1520]: time="2025-03-25T02:46:50.620508377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 25 02:46:50.623944 containerd[1520]: time="2025-03-25T02:46:50.623816542Z" level=info msg="CreateContainer within sandbox \"4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 25 02:46:50.654081 containerd[1520]: time="2025-03-25T02:46:50.654028750Z" level=info msg="Container 87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:46:50.692634 containerd[1520]: time="2025-03-25T02:46:50.692471858Z" level=info msg="CreateContainer within sandbox \"4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412\"" Mar 25 02:46:50.695386 containerd[1520]: time="2025-03-25T02:46:50.693772272Z" level=info msg="StartContainer for \"87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412\"" Mar 25 02:46:50.706938 containerd[1520]: time="2025-03-25T02:46:50.706896964Z" level=info msg="connecting to shim 87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412" address="unix:///run/containerd/s/82e6d539e6fa4cf0851ac20cfdddd0fec7892e8951ce688e05a54d9d79646c8f" protocol=ttrpc version=3 Mar 25 02:46:50.766083 systemd[1]: Started cri-containerd-87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412.scope - libcontainer container 87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412. Mar 25 02:46:50.851053 containerd[1520]: time="2025-03-25T02:46:50.850771343Z" level=info msg="StartContainer for \"87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412\" returns successfully" Mar 25 02:46:51.632770 kubelet[2798]: E0325 02:46:51.632461 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h6zz7" podUID="2d008efc-d6e3-44fa-b5c6-37d59264bf67" Mar 25 02:46:51.680389 systemd[1]: cri-containerd-87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412.scope: Deactivated successfully. 
Mar 25 02:46:51.680992 systemd[1]: cri-containerd-87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412.scope: Consumed 788ms CPU time, 149.2M memory peak, 1.8M read from disk, 154M written to disk. Mar 25 02:46:51.684463 containerd[1520]: time="2025-03-25T02:46:51.684292678Z" level=info msg="received exit event container_id:\"87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412\" id:\"87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412\" pid:3536 exited_at:{seconds:1742870811 nanos:683822208}" Mar 25 02:46:51.688058 containerd[1520]: time="2025-03-25T02:46:51.687550471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412\" id:\"87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412\" pid:3536 exited_at:{seconds:1742870811 nanos:683822208}" Mar 25 02:46:51.744307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412-rootfs.mount: Deactivated successfully. Mar 25 02:46:51.859011 kubelet[2798]: I0325 02:46:51.858966 2798 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 25 02:46:52.062743 systemd[1]: Created slice kubepods-besteffort-podfaf12e28_177f_4d74_9f9e_8e39e2034cdd.slice - libcontainer container kubepods-besteffort-podfaf12e28_177f_4d74_9f9e_8e39e2034cdd.slice. Mar 25 02:46:52.077921 systemd[1]: Created slice kubepods-besteffort-pod55c6465a_b1dd_45a3_b84c_6339283c9078.slice - libcontainer container kubepods-besteffort-pod55c6465a_b1dd_45a3_b84c_6339283c9078.slice. Mar 25 02:46:52.095339 systemd[1]: Created slice kubepods-besteffort-pod7b782bad_d6a9_42d9_9981_9048e9834ec0.slice - libcontainer container kubepods-besteffort-pod7b782bad_d6a9_42d9_9981_9048e9834ec0.slice. Mar 25 02:46:52.113473 systemd[1]: Created slice kubepods-burstable-pod5bfa8497_c633_4917_a163_1574ac7f04bf.slice - libcontainer container kubepods-burstable-pod5bfa8497_c633_4917_a163_1574ac7f04bf.slice. Mar 25 02:46:52.123131 systemd[1]: Created slice kubepods-burstable-podc97036f6_34cb_4044_80e3_4bb047e317c7.slice - libcontainer container kubepods-burstable-podc97036f6_34cb_4044_80e3_4bb047e317c7.slice. 
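The Created slice names above follow the kubelet systemd cgroup driver's convention: kubepods, then the pod's QoS class, then "pod" plus the UID with its dashes turned into underscores, since systemd reserves "-" as a hierarchy separator in slice names. A sketch of the mangling, reproducing one of the names above:

// Rebuild a pod slice name the way the systemd cgroup driver does.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Reproduces kubepods-besteffort-podfaf12e28_177f_4d74_9f9e_8e39e2034cdd.slice
	fmt.Println(podSlice("besteffort", "faf12e28-177f-4d74-9f9e-8e39e2034cdd"))
}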
Mar 25 02:46:52.148973 kubelet[2798]: I0325 02:46:52.148907 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbtkx\" (UniqueName: \"kubernetes.io/projected/faf12e28-177f-4d74-9f9e-8e39e2034cdd-kube-api-access-bbtkx\") pod \"calico-apiserver-7867f858bf-nmcst\" (UID: \"faf12e28-177f-4d74-9f9e-8e39e2034cdd\") " pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst" Mar 25 02:46:52.149520 kubelet[2798]: I0325 02:46:52.149230 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b782bad-d6a9-42d9-9981-9048e9834ec0-calico-apiserver-certs\") pod \"calico-apiserver-7867f858bf-sjh4v\" (UID: \"7b782bad-d6a9-42d9-9981-9048e9834ec0\") " pod="calico-apiserver/calico-apiserver-7867f858bf-sjh4v" Mar 25 02:46:52.149520 kubelet[2798]: I0325 02:46:52.149411 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96xh2\" (UniqueName: \"kubernetes.io/projected/7b782bad-d6a9-42d9-9981-9048e9834ec0-kube-api-access-96xh2\") pod \"calico-apiserver-7867f858bf-sjh4v\" (UID: \"7b782bad-d6a9-42d9-9981-9048e9834ec0\") " pod="calico-apiserver/calico-apiserver-7867f858bf-sjh4v" Mar 25 02:46:52.149835 kubelet[2798]: I0325 02:46:52.149495 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4pz\" (UniqueName: \"kubernetes.io/projected/55c6465a-b1dd-45a3-b84c-6339283c9078-kube-api-access-7s4pz\") pod \"calico-kube-controllers-664b77dd74-d7f4s\" (UID: \"55c6465a-b1dd-45a3-b84c-6339283c9078\") " pod="calico-system/calico-kube-controllers-664b77dd74-d7f4s" Mar 25 02:46:52.149835 kubelet[2798]: I0325 02:46:52.149790 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/faf12e28-177f-4d74-9f9e-8e39e2034cdd-calico-apiserver-certs\") pod \"calico-apiserver-7867f858bf-nmcst\" (UID: \"faf12e28-177f-4d74-9f9e-8e39e2034cdd\") " pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst" Mar 25 02:46:52.150270 kubelet[2798]: I0325 02:46:52.150139 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv8pv\" (UniqueName: \"kubernetes.io/projected/c97036f6-34cb-4044-80e3-4bb047e317c7-kube-api-access-pv8pv\") pod \"coredns-6f6b679f8f-l8f9s\" (UID: \"c97036f6-34cb-4044-80e3-4bb047e317c7\") " pod="kube-system/coredns-6f6b679f8f-l8f9s" Mar 25 02:46:52.150270 kubelet[2798]: I0325 02:46:52.150208 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bfa8497-c633-4917-a163-1574ac7f04bf-config-volume\") pod \"coredns-6f6b679f8f-7vrcq\" (UID: \"5bfa8497-c633-4917-a163-1574ac7f04bf\") " pod="kube-system/coredns-6f6b679f8f-7vrcq" Mar 25 02:46:52.150489 kubelet[2798]: I0325 02:46:52.150249 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55c6465a-b1dd-45a3-b84c-6339283c9078-tigera-ca-bundle\") pod \"calico-kube-controllers-664b77dd74-d7f4s\" (UID: \"55c6465a-b1dd-45a3-b84c-6339283c9078\") " pod="calico-system/calico-kube-controllers-664b77dd74-d7f4s" Mar 25 02:46:52.150489 kubelet[2798]: I0325 02:46:52.150453 2798 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c97036f6-34cb-4044-80e3-4bb047e317c7-config-volume\") pod \"coredns-6f6b679f8f-l8f9s\" (UID: \"c97036f6-34cb-4044-80e3-4bb047e317c7\") " pod="kube-system/coredns-6f6b679f8f-l8f9s" Mar 25 02:46:52.150786 kubelet[2798]: I0325 02:46:52.150663 2798 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-754dk\" (UniqueName: \"kubernetes.io/projected/5bfa8497-c633-4917-a163-1574ac7f04bf-kube-api-access-754dk\") pod \"coredns-6f6b679f8f-7vrcq\" (UID: \"5bfa8497-c633-4917-a163-1574ac7f04bf\") " pod="kube-system/coredns-6f6b679f8f-7vrcq" Mar 25 02:46:52.373720 containerd[1520]: time="2025-03-25T02:46:52.373560700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7867f858bf-nmcst,Uid:faf12e28-177f-4d74-9f9e-8e39e2034cdd,Namespace:calico-apiserver,Attempt:0,}" Mar 25 02:46:52.393899 containerd[1520]: time="2025-03-25T02:46:52.393655948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664b77dd74-d7f4s,Uid:55c6465a-b1dd-45a3-b84c-6339283c9078,Namespace:calico-system,Attempt:0,}" Mar 25 02:46:52.411101 containerd[1520]: time="2025-03-25T02:46:52.410766199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7867f858bf-sjh4v,Uid:7b782bad-d6a9-42d9-9981-9048e9834ec0,Namespace:calico-apiserver,Attempt:0,}" Mar 25 02:46:52.419540 containerd[1520]: time="2025-03-25T02:46:52.419502995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vrcq,Uid:5bfa8497-c633-4917-a163-1574ac7f04bf,Namespace:kube-system,Attempt:0,}" Mar 25 02:46:52.443961 containerd[1520]: time="2025-03-25T02:46:52.443847883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l8f9s,Uid:c97036f6-34cb-4044-80e3-4bb047e317c7,Namespace:kube-system,Attempt:0,}" Mar 25 02:46:52.680507 containerd[1520]: time="2025-03-25T02:46:52.680384660Z" level=error msg="Failed to destroy network for sandbox \"e7b4a1de595e3a2cd232c04b524042c400b53d4c27e9ecc7260aaf5c7627ec4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.704754 containerd[1520]: time="2025-03-25T02:46:52.686052342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7867f858bf-sjh4v,Uid:7b782bad-d6a9-42d9-9981-9048e9834ec0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b4a1de595e3a2cd232c04b524042c400b53d4c27e9ecc7260aaf5c7627ec4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.706094 containerd[1520]: time="2025-03-25T02:46:52.686359611Z" level=error msg="Failed to destroy network for sandbox \"c139aa51636f89653e55fda75bc5c0472160c3d0d6a61e37ca7ab0814cca4958\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.713304 containerd[1520]: time="2025-03-25T02:46:52.711224610Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7867f858bf-nmcst,Uid:faf12e28-177f-4d74-9f9e-8e39e2034cdd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c139aa51636f89653e55fda75bc5c0472160c3d0d6a61e37ca7ab0814cca4958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.723599 kubelet[2798]: E0325 02:46:52.710828 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b4a1de595e3a2cd232c04b524042c400b53d4c27e9ecc7260aaf5c7627ec4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.723599 kubelet[2798]: E0325 02:46:52.721780 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b4a1de595e3a2cd232c04b524042c400b53d4c27e9ecc7260aaf5c7627ec4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7867f858bf-sjh4v" Mar 25 02:46:52.723599 kubelet[2798]: E0325 02:46:52.721837 2798 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7b4a1de595e3a2cd232c04b524042c400b53d4c27e9ecc7260aaf5c7627ec4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7867f858bf-sjh4v" Mar 25 02:46:52.724264 kubelet[2798]: E0325 02:46:52.721926 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7867f858bf-sjh4v_calico-apiserver(7b782bad-d6a9-42d9-9981-9048e9834ec0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7867f858bf-sjh4v_calico-apiserver(7b782bad-d6a9-42d9-9981-9048e9834ec0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7b4a1de595e3a2cd232c04b524042c400b53d4c27e9ecc7260aaf5c7627ec4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7867f858bf-sjh4v" podUID="7b782bad-d6a9-42d9-9981-9048e9834ec0" Mar 25 02:46:52.724264 kubelet[2798]: E0325 02:46:52.713214 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c139aa51636f89653e55fda75bc5c0472160c3d0d6a61e37ca7ab0814cca4958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.724264 kubelet[2798]: E0325 02:46:52.723465 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c139aa51636f89653e55fda75bc5c0472160c3d0d6a61e37ca7ab0814cca4958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst" Mar 25 02:46:52.724486 kubelet[2798]: E0325 02:46:52.723496 2798 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c139aa51636f89653e55fda75bc5c0472160c3d0d6a61e37ca7ab0814cca4958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst" Mar 25 02:46:52.724486 kubelet[2798]: E0325 02:46:52.723539 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7867f858bf-nmcst_calico-apiserver(faf12e28-177f-4d74-9f9e-8e39e2034cdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7867f858bf-nmcst_calico-apiserver(faf12e28-177f-4d74-9f9e-8e39e2034cdd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c139aa51636f89653e55fda75bc5c0472160c3d0d6a61e37ca7ab0814cca4958\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst" podUID="faf12e28-177f-4d74-9f9e-8e39e2034cdd" Mar 25 02:46:52.737065 containerd[1520]: time="2025-03-25T02:46:52.737006165Z" level=error msg="Failed to destroy network for sandbox \"1398c84857fea0fd0a9033dae8adb37e0e09d5d2adfc163cf7128700f362ce3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.742546 containerd[1520]: time="2025-03-25T02:46:52.742481158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664b77dd74-d7f4s,Uid:55c6465a-b1dd-45a3-b84c-6339283c9078,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1398c84857fea0fd0a9033dae8adb37e0e09d5d2adfc163cf7128700f362ce3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.746641 kubelet[2798]: E0325 02:46:52.743495 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1398c84857fea0fd0a9033dae8adb37e0e09d5d2adfc163cf7128700f362ce3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.746641 kubelet[2798]: E0325 02:46:52.743704 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1398c84857fea0fd0a9033dae8adb37e0e09d5d2adfc163cf7128700f362ce3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-664b77dd74-d7f4s" Mar 25 02:46:52.746641 kubelet[2798]: E0325 02:46:52.744946 2798 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1398c84857fea0fd0a9033dae8adb37e0e09d5d2adfc163cf7128700f362ce3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-664b77dd74-d7f4s" Mar 25 02:46:52.746829 kubelet[2798]: E0325 02:46:52.745050 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-664b77dd74-d7f4s_calico-system(55c6465a-b1dd-45a3-b84c-6339283c9078)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-664b77dd74-d7f4s_calico-system(55c6465a-b1dd-45a3-b84c-6339283c9078)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1398c84857fea0fd0a9033dae8adb37e0e09d5d2adfc163cf7128700f362ce3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-664b77dd74-d7f4s" podUID="55c6465a-b1dd-45a3-b84c-6339283c9078" Mar 25 02:46:52.756200 containerd[1520]: time="2025-03-25T02:46:52.754454287Z" level=error msg="Failed to destroy network for sandbox \"b2b6da710877af6ab51cea0c6ad43407fbbbab5109ae8ed7e5685bb38016915f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.756490 containerd[1520]: time="2025-03-25T02:46:52.756453279Z" level=error msg="Failed to destroy network for sandbox \"31b00eda6f738f7f3cc952b719bf8ff3741f358d46015886c1791698685985e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.757191 systemd[1]: run-netns-cni\x2d973b3b91\x2d9f0c\x2de406\x2d6067\x2dc158499b5fa5.mount: Deactivated successfully. 
Mar 25 02:46:52.760923 containerd[1520]: time="2025-03-25T02:46:52.760546526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vrcq,Uid:5bfa8497-c633-4917-a163-1574ac7f04bf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b6da710877af6ab51cea0c6ad43407fbbbab5109ae8ed7e5685bb38016915f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.762514 kubelet[2798]: E0325 02:46:52.762050 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b6da710877af6ab51cea0c6ad43407fbbbab5109ae8ed7e5685bb38016915f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.762514 kubelet[2798]: E0325 02:46:52.762121 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b6da710877af6ab51cea0c6ad43407fbbbab5109ae8ed7e5685bb38016915f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vrcq" Mar 25 02:46:52.762514 kubelet[2798]: E0325 02:46:52.762148 2798 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b6da710877af6ab51cea0c6ad43407fbbbab5109ae8ed7e5685bb38016915f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-7vrcq" Mar 25 02:46:52.762704 kubelet[2798]: E0325 02:46:52.762200 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7vrcq_kube-system(5bfa8497-c633-4917-a163-1574ac7f04bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7vrcq_kube-system(5bfa8497-c633-4917-a163-1574ac7f04bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2b6da710877af6ab51cea0c6ad43407fbbbab5109ae8ed7e5685bb38016915f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-7vrcq" podUID="5bfa8497-c633-4917-a163-1574ac7f04bf" Mar 25 02:46:52.762806 containerd[1520]: time="2025-03-25T02:46:52.762490276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l8f9s,Uid:c97036f6-34cb-4044-80e3-4bb047e317c7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31b00eda6f738f7f3cc952b719bf8ff3741f358d46015886c1791698685985e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.764068 kubelet[2798]: E0325 02:46:52.764034 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"31b00eda6f738f7f3cc952b719bf8ff3741f358d46015886c1791698685985e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:52.764348 systemd[1]: run-netns-cni\x2df1755305\x2de110\x2dcf05\x2defad\x2deb4de3e9b949.mount: Deactivated successfully. Mar 25 02:46:52.764718 systemd[1]: run-netns-cni\x2d8f702e80\x2dcc43\x2dd837\x2d060b\x2df37247e20c9d.mount: Deactivated successfully. Mar 25 02:46:52.766134 kubelet[2798]: E0325 02:46:52.765075 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31b00eda6f738f7f3cc952b719bf8ff3741f358d46015886c1791698685985e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l8f9s" Mar 25 02:46:52.766134 kubelet[2798]: E0325 02:46:52.765110 2798 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31b00eda6f738f7f3cc952b719bf8ff3741f358d46015886c1791698685985e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-l8f9s" Mar 25 02:46:52.766134 kubelet[2798]: E0325 02:46:52.765157 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-l8f9s_kube-system(c97036f6-34cb-4044-80e3-4bb047e317c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-l8f9s_kube-system(c97036f6-34cb-4044-80e3-4bb047e317c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31b00eda6f738f7f3cc952b719bf8ff3741f358d46015886c1791698685985e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-l8f9s" podUID="c97036f6-34cb-4044-80e3-4bb047e317c7" Mar 25 02:46:52.879354 containerd[1520]: time="2025-03-25T02:46:52.879032712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 25 02:46:53.639482 systemd[1]: Created slice kubepods-besteffort-pod2d008efc_d6e3_44fa_b5c6_37d59264bf67.slice - libcontainer container kubepods-besteffort-pod2d008efc_d6e3_44fa_b5c6_37d59264bf67.slice. 
Mar 25 02:46:53.643787 containerd[1520]: time="2025-03-25T02:46:53.643678918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6zz7,Uid:2d008efc-d6e3-44fa-b5c6-37d59264bf67,Namespace:calico-system,Attempt:0,}" Mar 25 02:46:53.728954 containerd[1520]: time="2025-03-25T02:46:53.728765891Z" level=error msg="Failed to destroy network for sandbox \"b6ef968ef6130b5b0319cb705d647192197eb7614c9f5eac53ec4e84a155af84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:53.731059 containerd[1520]: time="2025-03-25T02:46:53.730947672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6zz7,Uid:2d008efc-d6e3-44fa-b5c6-37d59264bf67,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ef968ef6130b5b0319cb705d647192197eb7614c9f5eac53ec4e84a155af84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:53.731664 kubelet[2798]: E0325 02:46:53.731538 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ef968ef6130b5b0319cb705d647192197eb7614c9f5eac53ec4e84a155af84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 02:46:53.732369 kubelet[2798]: E0325 02:46:53.731662 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ef968ef6130b5b0319cb705d647192197eb7614c9f5eac53ec4e84a155af84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6zz7" Mar 25 02:46:53.732369 kubelet[2798]: E0325 02:46:53.731707 2798 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6ef968ef6130b5b0319cb705d647192197eb7614c9f5eac53ec4e84a155af84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h6zz7" Mar 25 02:46:53.732369 kubelet[2798]: E0325 02:46:53.731790 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h6zz7_calico-system(2d008efc-d6e3-44fa-b5c6-37d59264bf67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h6zz7_calico-system(2d008efc-d6e3-44fa-b5c6-37d59264bf67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6ef968ef6130b5b0319cb705d647192197eb7614c9f5eac53ec4e84a155af84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h6zz7" podUID="2d008efc-d6e3-44fa-b5c6-37d59264bf67" Mar 25 02:46:53.732587 systemd[1]: run-netns-cni\x2d708298b3\x2def77\x2d8f81\x2d36f3\x2d71e3bf74ee35.mount: Deactivated successfully. 
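[Annotation] Every CreatePodSandbox failure in this stretch shares one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container writes only after it has started and bind-mounted /var/lib/calico/ from the host — and at this point the calico/node image is still being pulled (see the PullImage line above). A minimal, hypothetical Go sketch of that existence check follows; it is not Calico's actual source, only an illustration of the condition behind the repeated error string, with the path taken from the log:

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path from the log; calico/node bind-mounts
// /var/lib/calico/ from the host and writes this file at startup.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Until calico/node has started, this is exactly the condition
		// the CNI plugin reports for every ADD and DEL request.
		fmt.Fprintf(os.Stderr, "stat %s failed: %v\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Printf("node name: %s\n", data)
}

Once calico/node starts and writes the file, the kubelet's normal retry loop recreates the failed sandboxes, which is what the later RunPodSandbox records show.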
Mar 25 02:46:59.710411 kubelet[2798]: I0325 02:46:59.710026 2798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:47:02.513867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59375168.mount: Deactivated successfully. Mar 25 02:47:02.591587 containerd[1520]: time="2025-03-25T02:47:02.590994631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:02.604633 containerd[1520]: time="2025-03-25T02:47:02.595181189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 25 02:47:02.624765 containerd[1520]: time="2025-03-25T02:47:02.624660941Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:02.627214 containerd[1520]: time="2025-03-25T02:47:02.626423673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:02.627214 containerd[1520]: time="2025-03-25T02:47:02.626808015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 9.746696446s" Mar 25 02:47:02.627214 containerd[1520]: time="2025-03-25T02:47:02.626857044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 25 02:47:02.664325 containerd[1520]: time="2025-03-25T02:47:02.664248672Z" level=info msg="CreateContainer within sandbox \"4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 25 02:47:02.718902 containerd[1520]: time="2025-03-25T02:47:02.718278947Z" level=info msg="Container 08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:47:02.780291 containerd[1520]: time="2025-03-25T02:47:02.780148669Z" level=info msg="CreateContainer within sandbox \"4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\"" Mar 25 02:47:02.789700 containerd[1520]: time="2025-03-25T02:47:02.789396580Z" level=info msg="StartContainer for \"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\"" Mar 25 02:47:02.799585 containerd[1520]: time="2025-03-25T02:47:02.799291355Z" level=info msg="connecting to shim 08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47" address="unix:///run/containerd/s/82e6d539e6fa4cf0851ac20cfdddd0fec7892e8951ce688e05a54d9d79646c8f" protocol=ttrpc version=3 Mar 25 02:47:03.002133 systemd[1]: Started cri-containerd-08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47.scope - libcontainer container 08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47. 
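[Annotation] A rough throughput check from the pull records above, using only numbers present in the log: 142,241,445 bytes read in 9.746696446 s ≈ 14.6 MB/s (≈ 13.9 MiB/s). The pod_startup_latency_tracker line further down is consistent with the same timestamps: podStartE2EDuration 26.998840468 s minus the pull window (lastFinishedPulling − firstStartedPulling ≈ 23.576707 s) ≈ 3.422133 s, which equals the reported podStartSLOduration — i.e. the SLO figure appears to discount image-pull time.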
Mar 25 02:47:03.131080 containerd[1520]: time="2025-03-25T02:47:03.127797152Z" level=info msg="StartContainer for \"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" returns successfully" Mar 25 02:47:03.256372 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 25 02:47:03.257674 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 25 02:47:03.633305 containerd[1520]: time="2025-03-25T02:47:03.633248672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664b77dd74-d7f4s,Uid:55c6465a-b1dd-45a3-b84c-6339283c9078,Namespace:calico-system,Attempt:0,}" Mar 25 02:47:04.028526 kubelet[2798]: I0325 02:47:04.010109 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x74dh" podStartSLOduration=3.422133362 podStartE2EDuration="26.998840468s" podCreationTimestamp="2025-03-25 02:46:37 +0000 UTC" firstStartedPulling="2025-03-25 02:46:39.051313164 +0000 UTC m=+18.609532570" lastFinishedPulling="2025-03-25 02:47:02.628020267 +0000 UTC m=+42.186239676" observedRunningTime="2025-03-25 02:47:03.993493842 +0000 UTC m=+43.551713291" watchObservedRunningTime="2025-03-25 02:47:03.998840468 +0000 UTC m=+43.557059886" Mar 25 02:47:04.029246 systemd-networkd[1446]: calib452edd3cad: Link UP Mar 25 02:47:04.029693 systemd-networkd[1446]: calib452edd3cad: Gained carrier Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.693 [INFO][3820] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.733 [INFO][3820] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0 calico-kube-controllers-664b77dd74- calico-system 55c6465a-b1dd-45a3-b84c-6339283c9078 688 0 2025-03-25 02:46:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:664b77dd74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-nkv7s.gb1.brightbox.com calico-kube-controllers-664b77dd74-d7f4s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib452edd3cad [] []}} ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Namespace="calico-system" Pod="calico-kube-controllers-664b77dd74-d7f4s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.733 [INFO][3820] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Namespace="calico-system" Pod="calico-kube-controllers-664b77dd74-d7f4s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.932 [INFO][3838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" HandleID="k8s-pod-network.53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Workload="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.951 [INFO][3838] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" HandleID="k8s-pod-network.53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Workload="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000279110), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-nkv7s.gb1.brightbox.com", "pod":"calico-kube-controllers-664b77dd74-d7f4s", "timestamp":"2025-03-25 02:47:03.932397552 +0000 UTC"}, Hostname:"srv-nkv7s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.951 [INFO][3838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.951 [INFO][3838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.952 [INFO][3838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nkv7s.gb1.brightbox.com' Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.955 [INFO][3838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.967 [INFO][3838] ipam/ipam.go 372: Looking up existing affinities for host host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.975 [INFO][3838] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.978 [INFO][3838] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.980 [INFO][3838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.980 [INFO][3838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.982 [INFO][3838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337 Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.988 [INFO][3838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.998 [INFO][3838] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.129/26] block=192.168.110.128/26 handle="k8s-pod-network.53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.999 [INFO][3838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.129/26] 
handle="k8s-pod-network.53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:04.052455 containerd[1520]: 2025-03-25 02:47:03.999 [INFO][3838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 02:47:04.055738 containerd[1520]: 2025-03-25 02:47:03.999 [INFO][3838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.129/26] IPv6=[] ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" HandleID="k8s-pod-network.53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Workload="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0" Mar 25 02:47:04.055738 containerd[1520]: 2025-03-25 02:47:04.004 [INFO][3820] cni-plugin/k8s.go 386: Populated endpoint ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Namespace="calico-system" Pod="calico-kube-controllers-664b77dd74-d7f4s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0", GenerateName:"calico-kube-controllers-664b77dd74-", Namespace:"calico-system", SelfLink:"", UID:"55c6465a-b1dd-45a3-b84c-6339283c9078", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 46, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664b77dd74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nkv7s.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-664b77dd74-d7f4s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib452edd3cad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:47:04.055738 containerd[1520]: 2025-03-25 02:47:04.004 [INFO][3820] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.129/32] ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Namespace="calico-system" Pod="calico-kube-controllers-664b77dd74-d7f4s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0" Mar 25 02:47:04.055738 containerd[1520]: 2025-03-25 02:47:04.004 [INFO][3820] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib452edd3cad ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Namespace="calico-system" Pod="calico-kube-controllers-664b77dd74-d7f4s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0" Mar 25 02:47:04.055738 containerd[1520]: 2025-03-25 02:47:04.020 [INFO][3820] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Namespace="calico-system" Pod="calico-kube-controllers-664b77dd74-d7f4s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0" Mar 25 02:47:04.055738 containerd[1520]: 2025-03-25 02:47:04.021 [INFO][3820] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Namespace="calico-system" Pod="calico-kube-controllers-664b77dd74-d7f4s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0", GenerateName:"calico-kube-controllers-664b77dd74-", Namespace:"calico-system", SelfLink:"", UID:"55c6465a-b1dd-45a3-b84c-6339283c9078", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 46, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"664b77dd74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nkv7s.gb1.brightbox.com", ContainerID:"53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337", Pod:"calico-kube-controllers-664b77dd74-d7f4s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib452edd3cad", MAC:"fe:0d:ef:ae:61:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:47:04.057529 containerd[1520]: 2025-03-25 02:47:04.045 [INFO][3820] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" Namespace="calico-system" Pod="calico-kube-controllers-664b77dd74-d7f4s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--kube--controllers--664b77dd74--d7f4s-eth0" Mar 25 02:47:04.127954 containerd[1520]: time="2025-03-25T02:47:04.127848907Z" level=info msg="connecting to shim 53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337" address="unix:///run/containerd/s/e9c8e45f5ca318b93f4f529389d56954ec0a748c29932e2a4991e1b9269a463f" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:47:04.168124 systemd[1]: Started cri-containerd-53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337.scope - libcontainer container 53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337. 
Mar 25 02:47:04.251264 containerd[1520]: time="2025-03-25T02:47:04.251204787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-664b77dd74-d7f4s,Uid:55c6465a-b1dd-45a3-b84c-6339283c9078,Namespace:calico-system,Attempt:0,} returns sandbox id \"53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337\"" Mar 25 02:47:04.256281 containerd[1520]: time="2025-03-25T02:47:04.256233268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 25 02:47:04.635079 containerd[1520]: time="2025-03-25T02:47:04.634411047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l8f9s,Uid:c97036f6-34cb-4044-80e3-4bb047e317c7,Namespace:kube-system,Attempt:0,}" Mar 25 02:47:04.635079 containerd[1520]: time="2025-03-25T02:47:04.634867458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6zz7,Uid:2d008efc-d6e3-44fa-b5c6-37d59264bf67,Namespace:calico-system,Attempt:0,}" Mar 25 02:47:05.110991 systemd-networkd[1446]: cali8d89cdd23a6: Link UP Mar 25 02:47:05.111385 systemd-networkd[1446]: cali8d89cdd23a6: Gained carrier Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:04.702 [INFO][3900] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:04.746 [INFO][3900] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0 coredns-6f6b679f8f- kube-system c97036f6-34cb-4044-80e3-4bb047e317c7 687 0 2025-03-25 02:46:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-nkv7s.gb1.brightbox.com coredns-6f6b679f8f-l8f9s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8d89cdd23a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Namespace="kube-system" Pod="coredns-6f6b679f8f-l8f9s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:04.746 [INFO][3900] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Namespace="kube-system" Pod="coredns-6f6b679f8f-l8f9s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:04.953 [INFO][3966] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" HandleID="k8s-pod-network.05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Workload="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.000 [INFO][3966] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" HandleID="k8s-pod-network.05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Workload="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f2b40), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-nkv7s.gb1.brightbox.com", "pod":"coredns-6f6b679f8f-l8f9s", 
"timestamp":"2025-03-25 02:47:04.951330652 +0000 UTC"}, Hostname:"srv-nkv7s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.000 [INFO][3966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.000 [INFO][3966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.000 [INFO][3966] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nkv7s.gb1.brightbox.com' Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.007 [INFO][3966] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.023 [INFO][3966] ipam/ipam.go 372: Looking up existing affinities for host host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.043 [INFO][3966] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.050 [INFO][3966] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.057 [INFO][3966] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.058 [INFO][3966] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.067 [INFO][3966] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251 Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.078 [INFO][3966] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.091 [INFO][3966] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.130/26] block=192.168.110.128/26 handle="k8s-pod-network.05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.091 [INFO][3966] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.130/26] handle="k8s-pod-network.05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.091 [INFO][3966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 02:47:05.167416 containerd[1520]: 2025-03-25 02:47:05.091 [INFO][3966] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.130/26] IPv6=[] ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" HandleID="k8s-pod-network.05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Workload="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0" Mar 25 02:47:05.168978 containerd[1520]: 2025-03-25 02:47:05.100 [INFO][3900] cni-plugin/k8s.go 386: Populated endpoint ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Namespace="kube-system" Pod="coredns-6f6b679f8f-l8f9s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c97036f6-34cb-4044-80e3-4bb047e317c7", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 46, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nkv7s.gb1.brightbox.com", ContainerID:"", Pod:"coredns-6f6b679f8f-l8f9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d89cdd23a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:47:05.168978 containerd[1520]: 2025-03-25 02:47:05.101 [INFO][3900] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.130/32] ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Namespace="kube-system" Pod="coredns-6f6b679f8f-l8f9s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0" Mar 25 02:47:05.168978 containerd[1520]: 2025-03-25 02:47:05.101 [INFO][3900] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d89cdd23a6 ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Namespace="kube-system" Pod="coredns-6f6b679f8f-l8f9s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0" Mar 25 02:47:05.168978 containerd[1520]: 2025-03-25 02:47:05.108 [INFO][3900] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Namespace="kube-system" Pod="coredns-6f6b679f8f-l8f9s" 
WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0" Mar 25 02:47:05.168978 containerd[1520]: 2025-03-25 02:47:05.110 [INFO][3900] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Namespace="kube-system" Pod="coredns-6f6b679f8f-l8f9s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"c97036f6-34cb-4044-80e3-4bb047e317c7", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 46, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nkv7s.gb1.brightbox.com", ContainerID:"05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251", Pod:"coredns-6f6b679f8f-l8f9s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d89cdd23a6", MAC:"8a:d8:44:3d:7e:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:47:05.169366 containerd[1520]: 2025-03-25 02:47:05.151 [INFO][3900] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" Namespace="kube-system" Pod="coredns-6f6b679f8f-l8f9s" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--l8f9s-eth0" Mar 25 02:47:05.230355 systemd-networkd[1446]: cali605c12aba71: Link UP Mar 25 02:47:05.236288 systemd-networkd[1446]: cali605c12aba71: Gained carrier Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:04.750 [INFO][3903] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:04.813 [INFO][3903] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0 csi-node-driver- calico-system 2d008efc-d6e3-44fa-b5c6-37d59264bf67 591 0 2025-03-25 02:46:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-nkv7s.gb1.brightbox.com csi-node-driver-h6zz7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali605c12aba71 [] []}} ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Namespace="calico-system" Pod="csi-node-driver-h6zz7" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:04.813 [INFO][3903] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Namespace="calico-system" Pod="csi-node-driver-h6zz7" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.003 [INFO][3990] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" HandleID="k8s-pod-network.f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Workload="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.030 [INFO][3990] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" HandleID="k8s-pod-network.f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Workload="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000425280), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-nkv7s.gb1.brightbox.com", "pod":"csi-node-driver-h6zz7", "timestamp":"2025-03-25 02:47:05.003090775 +0000 UTC"}, Hostname:"srv-nkv7s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.030 [INFO][3990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.092 [INFO][3990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.092 [INFO][3990] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nkv7s.gb1.brightbox.com' Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.113 [INFO][3990] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.125 [INFO][3990] ipam/ipam.go 372: Looking up existing affinities for host host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.135 [INFO][3990] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.140 [INFO][3990] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.147 [INFO][3990] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.148 [INFO][3990] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.157 [INFO][3990] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99 Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.172 [INFO][3990] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.190 [INFO][3990] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.131/26] block=192.168.110.128/26 handle="k8s-pod-network.f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.196 [INFO][3990] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.131/26] handle="k8s-pod-network.f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.196 [INFO][3990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 02:47:05.282201 containerd[1520]: 2025-03-25 02:47:05.196 [INFO][3990] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.131/26] IPv6=[] ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" HandleID="k8s-pod-network.f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Workload="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0" Mar 25 02:47:05.284847 containerd[1520]: 2025-03-25 02:47:05.205 [INFO][3903] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Namespace="calico-system" Pod="csi-node-driver-h6zz7" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d008efc-d6e3-44fa-b5c6-37d59264bf67", ResourceVersion:"591", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 46, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nkv7s.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-h6zz7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali605c12aba71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:47:05.284847 containerd[1520]: 2025-03-25 02:47:05.206 [INFO][3903] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.131/32] ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Namespace="calico-system" Pod="csi-node-driver-h6zz7" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0" Mar 25 02:47:05.284847 containerd[1520]: 2025-03-25 02:47:05.207 [INFO][3903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali605c12aba71 ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Namespace="calico-system" Pod="csi-node-driver-h6zz7" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0" Mar 25 02:47:05.284847 containerd[1520]: 2025-03-25 02:47:05.249 [INFO][3903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Namespace="calico-system" Pod="csi-node-driver-h6zz7" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0" Mar 25 02:47:05.284847 containerd[1520]: 2025-03-25 02:47:05.253 [INFO][3903] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Namespace="calico-system" Pod="csi-node-driver-h6zz7" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2d008efc-d6e3-44fa-b5c6-37d59264bf67", ResourceVersion:"591", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 46, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nkv7s.gb1.brightbox.com", ContainerID:"f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99", Pod:"csi-node-driver-h6zz7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali605c12aba71", MAC:"5a:76:9b:27:6b:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:47:05.284847 containerd[1520]: 2025-03-25 02:47:05.269 [INFO][3903] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" Namespace="calico-system" Pod="csi-node-driver-h6zz7" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-csi--node--driver--h6zz7-eth0" Mar 25 02:47:05.321114 containerd[1520]: time="2025-03-25T02:47:05.321007964Z" level=info msg="connecting to shim 05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251" address="unix:///run/containerd/s/e416db16bc72814713dd13ddd0d904598150ce724a7044c5d51603bc998d1ebc" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:47:05.415505 containerd[1520]: time="2025-03-25T02:47:05.415445867Z" level=info msg="connecting to shim f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99" address="unix:///run/containerd/s/e1325da5d93355f3d0e69008bc4c62cd34f922748dd5ae0729fce058f4b50c77" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:47:05.469211 systemd[1]: Started cri-containerd-05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251.scope - libcontainer container 05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251. Mar 25 02:47:05.496121 systemd[1]: Started cri-containerd-f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99.scope - libcontainer container f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99. 
Mar 25 02:47:05.636082 containerd[1520]: time="2025-03-25T02:47:05.635841252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7867f858bf-nmcst,Uid:faf12e28-177f-4d74-9f9e-8e39e2034cdd,Namespace:calico-apiserver,Attempt:0,}" Mar 25 02:47:05.639721 containerd[1520]: time="2025-03-25T02:47:05.636847672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vrcq,Uid:5bfa8497-c633-4917-a163-1574ac7f04bf,Namespace:kube-system,Attempt:0,}" Mar 25 02:47:05.666167 containerd[1520]: time="2025-03-25T02:47:05.663855057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-l8f9s,Uid:c97036f6-34cb-4044-80e3-4bb047e317c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251\"" Mar 25 02:47:05.690012 containerd[1520]: time="2025-03-25T02:47:05.689943054Z" level=info msg="CreateContainer within sandbox \"05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 02:47:05.764599 containerd[1520]: time="2025-03-25T02:47:05.764542926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h6zz7,Uid:2d008efc-d6e3-44fa-b5c6-37d59264bf67,Namespace:calico-system,Attempt:0,} returns sandbox id \"f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99\"" Mar 25 02:47:05.778025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582506803.mount: Deactivated successfully. Mar 25 02:47:05.785921 containerd[1520]: time="2025-03-25T02:47:05.782336569Z" level=info msg="Container c68c38564f0b69deaf2f165f979fcf16132a2bfface4b102852692a1d799660e: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:47:05.811235 containerd[1520]: time="2025-03-25T02:47:05.810857491Z" level=info msg="CreateContainer within sandbox \"05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c68c38564f0b69deaf2f165f979fcf16132a2bfface4b102852692a1d799660e\"" Mar 25 02:47:05.815266 containerd[1520]: time="2025-03-25T02:47:05.814373111Z" level=info msg="StartContainer for \"c68c38564f0b69deaf2f165f979fcf16132a2bfface4b102852692a1d799660e\"" Mar 25 02:47:05.824131 containerd[1520]: time="2025-03-25T02:47:05.823693786Z" level=info msg="connecting to shim c68c38564f0b69deaf2f165f979fcf16132a2bfface4b102852692a1d799660e" address="unix:///run/containerd/s/e416db16bc72814713dd13ddd0d904598150ce724a7044c5d51603bc998d1ebc" protocol=ttrpc version=3 Mar 25 02:47:05.909235 systemd[1]: Started cri-containerd-c68c38564f0b69deaf2f165f979fcf16132a2bfface4b102852692a1d799660e.scope - libcontainer container c68c38564f0b69deaf2f165f979fcf16132a2bfface4b102852692a1d799660e. 
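[Annotation] In the WorkloadEndpoint dumps above, containerd prints the Go struct values, so ports appear in hexadecimal: Port:0x35 is 53 (DNS over both UDP and TCP), and Port:0x23c1 is 9153, the CoreDNS Prometheus metrics port, since 0x23c1 = 2·4096 + 3·256 + 12·16 + 1 = 9153.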
Mar 25 02:47:05.937097 systemd-networkd[1446]: calib452edd3cad: Gained IPv6LL Mar 25 02:47:06.096093 containerd[1520]: time="2025-03-25T02:47:06.074987599Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"ab93fcb2d5295bb8774d1d22957deba7bdf2b40317743778982f3f34c4e8b047\" pid:4053 exit_status:1 exited_at:{seconds:1742870826 nanos:67078132}" Mar 25 02:47:06.107478 containerd[1520]: time="2025-03-25T02:47:06.106684740Z" level=info msg="StartContainer for \"c68c38564f0b69deaf2f165f979fcf16132a2bfface4b102852692a1d799660e\" returns successfully" Mar 25 02:47:06.320213 systemd-networkd[1446]: cali605c12aba71: Gained IPv6LL Mar 25 02:47:06.370687 systemd-networkd[1446]: calib6f514a4d7f: Link UP Mar 25 02:47:06.372244 systemd-networkd[1446]: calib6f514a4d7f: Gained carrier Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:05.918 [INFO][4162] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:05.971 [INFO][4162] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0 calico-apiserver-7867f858bf- calico-apiserver faf12e28-177f-4d74-9f9e-8e39e2034cdd 683 0 2025-03-25 02:46:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7867f858bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-nkv7s.gb1.brightbox.com calico-apiserver-7867f858bf-nmcst eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6f514a4d7f [] []}} ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Namespace="calico-apiserver" Pod="calico-apiserver-7867f858bf-nmcst" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:05.971 [INFO][4162] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Namespace="calico-apiserver" Pod="calico-apiserver-7867f858bf-nmcst" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.134 [INFO][4239] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" HandleID="k8s-pod-network.efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Workload="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.179 [INFO][4239] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" HandleID="k8s-pod-network.efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Workload="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011b310), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-nkv7s.gb1.brightbox.com", "pod":"calico-apiserver-7867f858bf-nmcst", "timestamp":"2025-03-25 02:47:06.134673098 +0000 UTC"}, Hostname:"srv-nkv7s.gb1.brightbox.com", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.179 [INFO][4239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.179 [INFO][4239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.180 [INFO][4239] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nkv7s.gb1.brightbox.com' Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.183 [INFO][4239] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.277 [INFO][4239] ipam/ipam.go 372: Looking up existing affinities for host host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.294 [INFO][4239] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.300 [INFO][4239] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.306 [INFO][4239] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.306 [INFO][4239] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.317 [INFO][4239] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.333 [INFO][4239] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.348 [INFO][4239] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.132/26] block=192.168.110.128/26 handle="k8s-pod-network.efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.348 [INFO][4239] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.132/26] handle="k8s-pod-network.efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.423713 containerd[1520]: 2025-03-25 02:47:06.348 [INFO][4239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 02:47:06.427060 containerd[1520]: 2025-03-25 02:47:06.348 [INFO][4239] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.132/26] IPv6=[] ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" HandleID="k8s-pod-network.efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Workload="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0" Mar 25 02:47:06.427060 containerd[1520]: 2025-03-25 02:47:06.357 [INFO][4162] cni-plugin/k8s.go 386: Populated endpoint ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Namespace="calico-apiserver" Pod="calico-apiserver-7867f858bf-nmcst" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0", GenerateName:"calico-apiserver-7867f858bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"faf12e28-177f-4d74-9f9e-8e39e2034cdd", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 46, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7867f858bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nkv7s.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7867f858bf-nmcst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6f514a4d7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:47:06.427060 containerd[1520]: 2025-03-25 02:47:06.358 [INFO][4162] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.132/32] ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Namespace="calico-apiserver" Pod="calico-apiserver-7867f858bf-nmcst" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0" Mar 25 02:47:06.427060 containerd[1520]: 2025-03-25 02:47:06.358 [INFO][4162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6f514a4d7f ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Namespace="calico-apiserver" Pod="calico-apiserver-7867f858bf-nmcst" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0" Mar 25 02:47:06.427060 containerd[1520]: 2025-03-25 02:47:06.371 [INFO][4162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Namespace="calico-apiserver" Pod="calico-apiserver-7867f858bf-nmcst" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0" Mar 25 02:47:06.427060 containerd[1520]: 2025-03-25 02:47:06.374 [INFO][4162] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Namespace="calico-apiserver" Pod="calico-apiserver-7867f858bf-nmcst" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0", GenerateName:"calico-apiserver-7867f858bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"faf12e28-177f-4d74-9f9e-8e39e2034cdd", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 46, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7867f858bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nkv7s.gb1.brightbox.com", ContainerID:"efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc", Pod:"calico-apiserver-7867f858bf-nmcst", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6f514a4d7f", MAC:"7e:01:9a:02:45:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:47:06.428077 containerd[1520]: 2025-03-25 02:47:06.413 [INFO][4162] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc" Namespace="calico-apiserver" Pod="calico-apiserver-7867f858bf-nmcst" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-calico--apiserver--7867f858bf--nmcst-eth0" Mar 25 02:47:06.448288 systemd-networkd[1446]: cali8d89cdd23a6: Gained IPv6LL Mar 25 02:47:06.504383 systemd-networkd[1446]: cali256f8a43717: Link UP Mar 25 02:47:06.506336 systemd-networkd[1446]: cali256f8a43717: Gained carrier Mar 25 02:47:06.549767 kernel: bpftool[4304]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:05.903 [INFO][4164] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:05.945 [INFO][4164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0 coredns-6f6b679f8f- kube-system 5bfa8497-c633-4917-a163-1574ac7f04bf 686 0 2025-03-25 02:46:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-nkv7s.gb1.brightbox.com coredns-6f6b679f8f-7vrcq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali256f8a43717 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vrcq" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:05.947 [INFO][4164] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vrcq" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.127 [INFO][4233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" HandleID="k8s-pod-network.fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Workload="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.277 [INFO][4233] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" HandleID="k8s-pod-network.fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Workload="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027c450), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-nkv7s.gb1.brightbox.com", "pod":"coredns-6f6b679f8f-7vrcq", "timestamp":"2025-03-25 02:47:06.126452811 +0000 UTC"}, Hostname:"srv-nkv7s.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.280 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.348 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.350 [INFO][4233] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nkv7s.gb1.brightbox.com' Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.363 [INFO][4233] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.391 [INFO][4233] ipam/ipam.go 372: Looking up existing affinities for host host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.426 [INFO][4233] ipam/ipam.go 489: Trying affinity for 192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.434 [INFO][4233] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.448 [INFO][4233] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.128/26 host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.449 [INFO][4233] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.128/26 handle="k8s-pod-network.fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.453 [INFO][4233] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5 Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.469 [INFO][4233] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.128/26 handle="k8s-pod-network.fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.486 [INFO][4233] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.133/26] block=192.168.110.128/26 handle="k8s-pod-network.fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.486 [INFO][4233] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.133/26] handle="k8s-pod-network.fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" host="srv-nkv7s.gb1.brightbox.com" Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.487 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
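[Editor's note] Note the lock handoff between the two traces: the apiserver request ([4239]) acquired the host-wide IPAM lock at 06.179 and released it at 06.348, while the coredns request ([4233]), waiting since 06.280, acquired it at exactly 06.348. That serialization is what keeps two concurrent CNI ADDs from claiming the same address. Across processes this is commonly done with an advisory file lock; whether Calico uses flock or another mechanism, the effect is the same. A sketch of the pattern, with a placeholder lock path:

```go
// Host-wide advisory lock serializing concurrent assigners. The path
// is a placeholder for illustration, not Calico's actual lock file.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func withHostLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	// LOCK_EX blocks until any other holder releases — the gap between
	// "About to acquire" and "Acquired" in the traces above.
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		return err
	}
	defer unix.Flock(int(f.Fd()), unix.LOCK_UN)
	return fn()
}

func main() {
	err := withHostLock("/tmp/ipam.lock", func() error {
		fmt.Println("assigning one address under the lock")
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```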
Mar 25 02:47:06.588406 containerd[1520]: 2025-03-25 02:47:06.487 [INFO][4233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.133/26] IPv6=[] ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" HandleID="k8s-pod-network.fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Workload="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0" Mar 25 02:47:06.592539 containerd[1520]: 2025-03-25 02:47:06.497 [INFO][4164] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vrcq" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5bfa8497-c633-4917-a163-1574ac7f04bf", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 46, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nkv7s.gb1.brightbox.com", ContainerID:"", Pod:"coredns-6f6b679f8f-7vrcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali256f8a43717", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:47:06.592539 containerd[1520]: 2025-03-25 02:47:06.497 [INFO][4164] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.133/32] ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vrcq" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0" Mar 25 02:47:06.592539 containerd[1520]: 2025-03-25 02:47:06.497 [INFO][4164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali256f8a43717 ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vrcq" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0" Mar 25 02:47:06.592539 containerd[1520]: 2025-03-25 02:47:06.509 [INFO][4164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vrcq" 
WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0" Mar 25 02:47:06.592539 containerd[1520]: 2025-03-25 02:47:06.512 [INFO][4164] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vrcq" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5bfa8497-c633-4917-a163-1574ac7f04bf", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 46, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nkv7s.gb1.brightbox.com", ContainerID:"fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5", Pod:"coredns-6f6b679f8f-7vrcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali256f8a43717", MAC:"ee:ce:62:55:a2:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:47:06.593030 containerd[1520]: 2025-03-25 02:47:06.578 [INFO][4164] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5" Namespace="kube-system" Pod="coredns-6f6b679f8f-7vrcq" WorkloadEndpoint="srv--nkv7s.gb1.brightbox.com-k8s-coredns--6f6b679f8f--7vrcq-eth0" Mar 25 02:47:06.647900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2251078306.mount: Deactivated successfully. 
Mar 25 02:47:06.653319 containerd[1520]: time="2025-03-25T02:47:06.651654115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7867f858bf-sjh4v,Uid:7b782bad-d6a9-42d9-9981-9048e9834ec0,Namespace:calico-apiserver,Attempt:0,}" Mar 25 02:47:06.847343 containerd[1520]: time="2025-03-25T02:47:06.846378310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"a3c2a9ed05133bfd8f344cf0a64caccc82f827ad2770881515e09e4f9e1d645e\" pid:4276 exit_status:1 exited_at:{seconds:1742870826 nanos:844239363}" Mar 25 02:47:07.101424 kubelet[2798]: I0325 02:47:07.100613 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-l8f9s" podStartSLOduration=41.100581209 podStartE2EDuration="41.100581209s" podCreationTimestamp="2025-03-25 02:46:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:47:07.046809033 +0000 UTC m=+46.605028450" watchObservedRunningTime="2025-03-25 02:47:07.100581209 +0000 UTC m=+46.658800625" Mar 25 02:47:07.441416 systemd-networkd[1446]: vxlan.calico: Link UP Mar 25 02:47:07.441432 systemd-networkd[1446]: vxlan.calico: Gained carrier Mar 25 02:47:07.537089 systemd-networkd[1446]: calib6f514a4d7f: Gained IPv6LL Mar 25 02:47:08.314394 containerd[1520]: time="2025-03-25T02:47:08.314311779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:08.316569 containerd[1520]: time="2025-03-25T02:47:08.316433834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912" Mar 25 02:47:08.319015 containerd[1520]: time="2025-03-25T02:47:08.318902205Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:08.327460 containerd[1520]: time="2025-03-25T02:47:08.327389907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:08.332525 containerd[1520]: time="2025-03-25T02:47:08.332458227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 4.076163816s" Mar 25 02:47:08.332614 containerd[1520]: time="2025-03-25T02:47:08.332525032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\"" Mar 25 02:47:08.340170 containerd[1520]: time="2025-03-25T02:47:08.340115685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 25 02:47:08.368176 systemd-networkd[1446]: cali256f8a43717: Gained IPv6LL Mar 25 02:47:08.406578 containerd[1520]: time="2025-03-25T02:47:08.405137188Z" level=info msg="CreateContainer within sandbox \"53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 25 02:47:08.417627 containerd[1520]: time="2025-03-25T02:47:08.417588959Z" level=info msg="Container c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:47:08.431502 containerd[1520]: time="2025-03-25T02:47:08.431428906Z" level=info msg="CreateContainer within sandbox \"53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\"" Mar 25 02:47:08.432552 containerd[1520]: time="2025-03-25T02:47:08.432493987Z" level=info msg="StartContainer for \"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\"" Mar 25 02:47:08.434167 containerd[1520]: time="2025-03-25T02:47:08.434132993Z" level=info msg="connecting to shim c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43" address="unix:///run/containerd/s/e9c8e45f5ca318b93f4f529389d56954ec0a748c29932e2a4991e1b9269a463f" protocol=ttrpc version=3 Mar 25 02:47:08.496114 systemd[1]: Started cri-containerd-c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43.scope - libcontainer container c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43. Mar 25 02:47:08.661169 containerd[1520]: time="2025-03-25T02:47:08.661045329Z" level=info msg="StartContainer for \"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" returns successfully" Mar 25 02:47:08.945322 systemd-networkd[1446]: vxlan.calico: Gained IPv6LL Mar 25 02:47:09.050207 kubelet[2798]: I0325 02:47:09.050117 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-664b77dd74-d7f4s" podStartSLOduration=27.967382331 podStartE2EDuration="32.050088067s" podCreationTimestamp="2025-03-25 02:46:37 +0000 UTC" firstStartedPulling="2025-03-25 02:47:04.25409967 +0000 UTC m=+43.812319080" lastFinishedPulling="2025-03-25 02:47:08.3368054 +0000 UTC m=+47.895024816" observedRunningTime="2025-03-25 02:47:09.047938513 +0000 UTC m=+48.606157944" watchObservedRunningTime="2025-03-25 02:47:09.050088067 +0000 UTC m=+48.608307479" Mar 25 02:47:09.099815 containerd[1520]: time="2025-03-25T02:47:09.099742720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"f69260e98929bc318c6b59f40c8feef8f42427cd6ff7c76c68c974c8e9a1edb6\" pid:4455 exited_at:{seconds:1742870829 nanos:99334653}" Mar 25 02:47:10.595619 containerd[1520]: time="2025-03-25T02:47:10.595494467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 25 02:47:10.602233 containerd[1520]: time="2025-03-25T02:47:10.602157000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:10.606054 containerd[1520]: time="2025-03-25T02:47:10.605697665Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:10.609127 containerd[1520]: time="2025-03-25T02:47:10.609084621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:10.610514 
containerd[1520]: time="2025-03-25T02:47:10.610464280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 2.270303188s" Mar 25 02:47:10.610794 containerd[1520]: time="2025-03-25T02:47:10.610677774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 25 02:47:10.625251 containerd[1520]: time="2025-03-25T02:47:10.625209212Z" level=info msg="CreateContainer within sandbox \"f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 25 02:47:10.647108 containerd[1520]: time="2025-03-25T02:47:10.647062497Z" level=info msg="Container d6ab20bb34de55c4fea4e0fed3786d9d464b6be3c86d30486a76efdae2384609: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:47:10.659324 containerd[1520]: time="2025-03-25T02:47:10.659191089Z" level=info msg="CreateContainer within sandbox \"f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d6ab20bb34de55c4fea4e0fed3786d9d464b6be3c86d30486a76efdae2384609\"" Mar 25 02:47:10.661977 containerd[1520]: time="2025-03-25T02:47:10.660144685Z" level=info msg="StartContainer for \"d6ab20bb34de55c4fea4e0fed3786d9d464b6be3c86d30486a76efdae2384609\"" Mar 25 02:47:10.662755 containerd[1520]: time="2025-03-25T02:47:10.662722426Z" level=info msg="connecting to shim d6ab20bb34de55c4fea4e0fed3786d9d464b6be3c86d30486a76efdae2384609" address="unix:///run/containerd/s/e1325da5d93355f3d0e69008bc4c62cd34f922748dd5ae0729fce058f4b50c77" protocol=ttrpc version=3 Mar 25 02:47:10.699178 systemd[1]: Started cri-containerd-d6ab20bb34de55c4fea4e0fed3786d9d464b6be3c86d30486a76efdae2384609.scope - libcontainer container d6ab20bb34de55c4fea4e0fed3786d9d464b6be3c86d30486a76efdae2384609. 
Mar 25 02:47:10.804572 containerd[1520]: time="2025-03-25T02:47:10.804522455Z" level=info msg="StartContainer for \"d6ab20bb34de55c4fea4e0fed3786d9d464b6be3c86d30486a76efdae2384609\" returns successfully" Mar 25 02:47:10.813283 containerd[1520]: time="2025-03-25T02:47:10.811563190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 25 02:47:12.790099 containerd[1520]: time="2025-03-25T02:47:12.789987061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:12.791864 containerd[1520]: time="2025-03-25T02:47:12.791124492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 25 02:47:12.792543 containerd[1520]: time="2025-03-25T02:47:12.792460378Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:12.795574 containerd[1520]: time="2025-03-25T02:47:12.795411866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:47:12.796558 containerd[1520]: time="2025-03-25T02:47:12.796522248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.984875041s" Mar 25 02:47:12.797012 containerd[1520]: time="2025-03-25T02:47:12.796740942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 25 02:47:12.800954 containerd[1520]: time="2025-03-25T02:47:12.800530508Z" level=info msg="CreateContainer within sandbox \"f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 25 02:47:12.815122 containerd[1520]: time="2025-03-25T02:47:12.815051990Z" level=info msg="Container b4dab0cc7f284f8956a0fcbc397131a2166aebd1e5f2b04c37f671c10d3f8a9d: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:47:12.835925 containerd[1520]: time="2025-03-25T02:47:12.835763827Z" level=info msg="CreateContainer within sandbox \"f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b4dab0cc7f284f8956a0fcbc397131a2166aebd1e5f2b04c37f671c10d3f8a9d\"" Mar 25 02:47:12.838420 containerd[1520]: time="2025-03-25T02:47:12.838365037Z" level=info msg="StartContainer for \"b4dab0cc7f284f8956a0fcbc397131a2166aebd1e5f2b04c37f671c10d3f8a9d\"" Mar 25 02:47:12.842756 containerd[1520]: time="2025-03-25T02:47:12.842717619Z" level=info msg="connecting to shim b4dab0cc7f284f8956a0fcbc397131a2166aebd1e5f2b04c37f671c10d3f8a9d" address="unix:///run/containerd/s/e1325da5d93355f3d0e69008bc4c62cd34f922748dd5ae0729fce058f4b50c77" protocol=ttrpc version=3 Mar 25 02:47:12.882234 systemd[1]: Started 
cri-containerd-b4dab0cc7f284f8956a0fcbc397131a2166aebd1e5f2b04c37f671c10d3f8a9d.scope - libcontainer container b4dab0cc7f284f8956a0fcbc397131a2166aebd1e5f2b04c37f671c10d3f8a9d. Mar 25 02:47:12.964807 containerd[1520]: time="2025-03-25T02:47:12.964749644Z" level=info msg="StartContainer for \"b4dab0cc7f284f8956a0fcbc397131a2166aebd1e5f2b04c37f671c10d3f8a9d\" returns successfully" Mar 25 02:47:13.054451 kubelet[2798]: I0325 02:47:13.054261 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h6zz7" podStartSLOduration=29.036288407 podStartE2EDuration="36.054238012s" podCreationTimestamp="2025-03-25 02:46:37 +0000 UTC" firstStartedPulling="2025-03-25 02:47:05.780303841 +0000 UTC m=+45.338523252" lastFinishedPulling="2025-03-25 02:47:12.798253448 +0000 UTC m=+52.356472857" observedRunningTime="2025-03-25 02:47:13.053763043 +0000 UTC m=+52.611982496" watchObservedRunningTime="2025-03-25 02:47:13.054238012 +0000 UTC m=+52.612457429" Mar 25 02:47:13.920133 kubelet[2798]: I0325 02:47:13.920000 2798 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 25 02:47:13.927005 kubelet[2798]: I0325 02:47:13.926941 2798 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 25 02:47:17.822351 systemd[1]: Started sshd@11-10.230.58.198:22-81.192.87.130:46679.service - OpenSSH per-connection server daemon (81.192.87.130:46679). Mar 25 02:47:18.231857 sshd[4544]: Invalid user jonjon-2 from 81.192.87.130 port 46679 Mar 25 02:47:18.317202 sshd[4544]: Received disconnect from 81.192.87.130 port 46679:11: Bye Bye [preauth] Mar 25 02:47:18.317899 sshd[4544]: Disconnected from invalid user jonjon-2 81.192.87.130 port 46679 [preauth] Mar 25 02:47:18.321162 systemd[1]: sshd@11-10.230.58.198:22-81.192.87.130:46679.service: Deactivated successfully. 
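[Editor's note] The two pod_startup_latency_tracker lines are internally consistent. For csi-node-driver-h6zz7, podStartE2EDuration is observedRunningTime minus podCreationTimestamp (02:47:13.054 − 02:46:37 = 36.054s), and podStartSLOduration excludes the image-pull window: 36.054s − (12.798s − 5.780s) ≈ 29.036s, matching the reported 29.036288407s up to clock rounding. For coredns-6f6b679f8f-l8f9s earlier, both durations are equal (41.100581209s) because firstStartedPulling/lastFinishedPulling are zero timestamps — the image was already present, so no pull time is subtracted.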
Mar 25 02:47:22.475900 containerd[1520]: time="2025-03-25T02:47:22.475744409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"6c9482fecbf827d81a764b43819f51e794e8a437388bf8d6af78e276f8d00861\" pid:4571 exited_at:{seconds:1742870842 nanos:475010992}" Mar 25 02:47:25.052908 containerd[1520]: time="2025-03-25T02:47:25.051464002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"4c32baa5d5c61aa6c25c26750fc19fc4662ec82c153d24525197c9629d13b376\" pid:4592 exited_at:{seconds:1742870845 nanos:49854883}" Mar 25 02:47:31.632469 kubelet[2798]: E0325 02:47:31.632361 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:47:31.733380 kubelet[2798]: E0325 02:47:31.733282 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:47:31.933776 kubelet[2798]: E0325 02:47:31.933581 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:47:32.334234 kubelet[2798]: E0325 02:47:32.334052 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:47:32.765840 kubelet[2798]: I0325 02:47:32.764846 2798 setters.go:600] "Node became not ready" node="srv-nkv7s.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-25T02:47:32Z","lastTransitionTime":"2025-03-25T02:47:32Z","reason":"KubeletNotReady","message":"container runtime is down"} Mar 25 02:47:33.134999 kubelet[2798]: E0325 02:47:33.134952 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:47:34.735949 kubelet[2798]: E0325 02:47:34.735761 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:47:37.936368 kubelet[2798]: E0325 02:47:37.936291 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:47:41.434863 systemd[1]: Started sshd@12-10.230.58.198:22-139.178.68.195:38094.service - OpenSSH per-connection server daemon (139.178.68.195:38094). Mar 25 02:47:42.416023 sshd[4620]: Accepted publickey for core from 139.178.68.195 port 38094 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:47:42.417753 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:47:42.426982 systemd-logind[1503]: New session 12 of user core. Mar 25 02:47:42.432193 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 25 02:47:42.938294 kubelet[2798]: E0325 02:47:42.937372 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:47:43.737331 sshd[4622]: Connection closed by 139.178.68.195 port 38094 Mar 25 02:47:43.735990 sshd-session[4620]: pam_unix(sshd:session): session closed for user core Mar 25 02:47:43.742612 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit. Mar 25 02:47:43.742991 systemd[1]: sshd@12-10.230.58.198:22-139.178.68.195:38094.service: Deactivated successfully. Mar 25 02:47:43.746941 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 02:47:43.748700 systemd-logind[1503]: Removed session 12. 
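[Editor's note] The run of "Skipping pod synchronization … container runtime is down" errors, followed by the setters.go transition to NotReady at 02:47:32, is kubelet's runtime health gate: when its checks against the CRI socket stop succeeding recently enough, pod sync is skipped and the node's Ready condition flips to False with reason KubeletNotReady. A stripped-down version of the probe kubelet relies on — the CRI RuntimeService Status RPC — is sketched below; the socket path matches this host's containerd, and the rest is a simplified stand-in for kubelet's health loop:

```go
// Query the CRI runtime's health conditions, the signal behind the
// "container runtime is down" errors above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	resp, err := client.Status(ctx, &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err) // runtime unreachable counts as "down" too
	}
	// Expect RuntimeReady and NetworkReady; a false RuntimeReady is
	// what drives the NotReady node condition seen in the log.
	for _, c := range resp.GetStatus().GetConditions() {
		fmt.Printf("%s=%v (%s)\n", c.GetType(), c.GetStatus(), c.GetReason())
	}
}
```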
Mar 25 02:47:45.340105 containerd[1520]: time="2025-03-25T02:47:45.340003248Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"43495d38cac832932b7ea847545de423e169ebd160af700ae26ea53b5b348548\" pid:4647 exited_at:{seconds:1742870865 nanos:339619314}" Mar 25 02:47:47.937963 kubelet[2798]: E0325 02:47:47.937848 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:47:48.891955 systemd[1]: Started sshd@13-10.230.58.198:22-139.178.68.195:40744.service - OpenSSH per-connection server daemon (139.178.68.195:40744). Mar 25 02:47:49.802340 sshd[4665]: Accepted publickey for core from 139.178.68.195 port 40744 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:47:49.804596 sshd-session[4665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:47:49.814118 systemd-logind[1503]: New session 13 of user core. Mar 25 02:47:49.821102 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 25 02:47:50.528064 sshd[4667]: Connection closed by 139.178.68.195 port 40744 Mar 25 02:47:50.529199 sshd-session[4665]: pam_unix(sshd:session): session closed for user core Mar 25 02:47:50.536441 systemd[1]: sshd@13-10.230.58.198:22-139.178.68.195:40744.service: Deactivated successfully. Mar 25 02:47:50.539779 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 02:47:50.541546 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit. Mar 25 02:47:50.543391 systemd-logind[1503]: Removed session 13. Mar 25 02:47:52.455562 containerd[1520]: time="2025-03-25T02:47:52.455222409Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"ad5714303c6ae74b87c0f23dd67aa04eacb78af9f5b5a186ead02fe41883a6e5\" pid:4692 exited_at:{seconds:1742870872 nanos:454147296}" Mar 25 02:47:52.938278 kubelet[2798]: E0325 02:47:52.938170 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:47:55.053209 containerd[1520]: time="2025-03-25T02:47:55.053033595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"b6b907ffcedb3add6847c59dada9af5d48be4370502118a82b047e44a717f378\" pid:4713 exited_at:{seconds:1742870875 nanos:52050373}" Mar 25 02:47:55.687833 systemd[1]: Started sshd@14-10.230.58.198:22-139.178.68.195:53134.service - OpenSSH per-connection server daemon (139.178.68.195:53134). Mar 25 02:47:56.621536 sshd[4725]: Accepted publickey for core from 139.178.68.195 port 53134 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:47:56.623986 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:47:56.631710 systemd-logind[1503]: New session 14 of user core. Mar 25 02:47:56.640111 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 02:47:57.346707 sshd[4727]: Connection closed by 139.178.68.195 port 53134 Mar 25 02:47:57.347860 sshd-session[4725]: pam_unix(sshd:session): session closed for user core Mar 25 02:47:57.352706 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit. Mar 25 02:47:57.355102 systemd[1]: sshd@14-10.230.58.198:22-139.178.68.195:53134.service: Deactivated successfully. Mar 25 02:47:57.358968 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 25 02:47:57.360632 systemd-logind[1503]: Removed session 14. Mar 25 02:47:57.939212 kubelet[2798]: E0325 02:47:57.939136 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:48:02.505210 systemd[1]: Started sshd@15-10.230.58.198:22-139.178.68.195:53142.service - OpenSSH per-connection server daemon (139.178.68.195:53142). Mar 25 02:48:02.940126 kubelet[2798]: E0325 02:48:02.940037 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:48:03.421147 sshd[4742]: Accepted publickey for core from 139.178.68.195 port 53142 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:48:03.423676 sshd-session[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:48:03.431820 systemd-logind[1503]: New session 15 of user core. Mar 25 02:48:03.439120 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 02:48:04.146073 sshd[4744]: Connection closed by 139.178.68.195 port 53142 Mar 25 02:48:04.147433 sshd-session[4742]: pam_unix(sshd:session): session closed for user core Mar 25 02:48:04.153267 systemd[1]: sshd@15-10.230.58.198:22-139.178.68.195:53142.service: Deactivated successfully. Mar 25 02:48:04.156777 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 02:48:04.158162 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit. Mar 25 02:48:04.159955 systemd-logind[1503]: Removed session 15. Mar 25 02:48:07.940590 kubelet[2798]: E0325 02:48:07.940529 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:48:09.303108 systemd[1]: Started sshd@16-10.230.58.198:22-139.178.68.195:43462.service - OpenSSH per-connection server daemon (139.178.68.195:43462). Mar 25 02:48:10.224630 sshd[4757]: Accepted publickey for core from 139.178.68.195 port 43462 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:48:10.228203 sshd-session[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:48:10.239325 systemd-logind[1503]: New session 16 of user core. Mar 25 02:48:10.244421 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 02:48:10.939396 sshd[4760]: Connection closed by 139.178.68.195 port 43462 Mar 25 02:48:10.940410 sshd-session[4757]: pam_unix(sshd:session): session closed for user core Mar 25 02:48:10.946386 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit. Mar 25 02:48:10.947756 systemd[1]: sshd@16-10.230.58.198:22-139.178.68.195:43462.service: Deactivated successfully. Mar 25 02:48:10.951791 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 02:48:10.954776 systemd-logind[1503]: Removed session 16. 
Mar 25 02:48:12.940994 kubelet[2798]: E0325 02:48:12.940911 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:48:15.692576 update_engine[1504]: I20250325 02:48:15.692194 1504 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 25 02:48:15.692576 update_engine[1504]: I20250325 02:48:15.692358 1504 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 25 02:48:15.698842 update_engine[1504]: I20250325 02:48:15.698097 1504 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 25 02:48:15.699031 update_engine[1504]: I20250325 02:48:15.698990 1504 omaha_request_params.cc:62] Current group set to alpha Mar 25 02:48:15.699612 update_engine[1504]: I20250325 02:48:15.699294 1504 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 25 02:48:15.699612 update_engine[1504]: I20250325 02:48:15.699323 1504 update_attempter.cc:643] Scheduling an action processor start. Mar 25 02:48:15.699612 update_engine[1504]: I20250325 02:48:15.699368 1504 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 25 02:48:15.699612 update_engine[1504]: I20250325 02:48:15.699465 1504 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 25 02:48:15.699799 update_engine[1504]: I20250325 02:48:15.699620 1504 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 25 02:48:15.699799 update_engine[1504]: I20250325 02:48:15.699642 1504 omaha_request_action.cc:272] Request: Mar 25 02:48:15.699799 update_engine[1504]: Mar 25 02:48:15.699799 update_engine[1504]: Mar 25 02:48:15.699799 update_engine[1504]: Mar 25 02:48:15.699799 update_engine[1504]: Mar 25 02:48:15.699799 update_engine[1504]: Mar 25 02:48:15.699799 update_engine[1504]: Mar 25 02:48:15.699799 update_engine[1504]: Mar 25 02:48:15.699799 update_engine[1504]: Mar 25 02:48:15.699799 update_engine[1504]: I20250325 02:48:15.699662 1504 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 02:48:15.715999 update_engine[1504]: I20250325 02:48:15.713490 1504 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 02:48:15.716155 update_engine[1504]: I20250325 02:48:15.716109 1504 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 25 02:48:15.717198 locksmithd[1536]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 25 02:48:15.717766 update_engine[1504]: E20250325 02:48:15.717485 1504 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 02:48:15.717766 update_engine[1504]: I20250325 02:48:15.717582 1504 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 25 02:48:16.101990 systemd[1]: Started sshd@17-10.230.58.198:22-139.178.68.195:44958.service - OpenSSH per-connection server daemon (139.178.68.195:44958). Mar 25 02:48:17.020820 sshd[4780]: Accepted publickey for core from 139.178.68.195 port 44958 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:48:17.023760 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:48:17.032865 systemd-logind[1503]: New session 17 of user core. Mar 25 02:48:17.038155 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 25 02:48:17.760225 sshd[4782]: Connection closed by 139.178.68.195 port 44958 Mar 25 02:48:17.761253 sshd-session[4780]: pam_unix(sshd:session): session closed for user core Mar 25 02:48:17.770049 systemd[1]: sshd@17-10.230.58.198:22-139.178.68.195:44958.service: Deactivated successfully. Mar 25 02:48:17.773102 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 02:48:17.774566 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit. Mar 25 02:48:17.776170 systemd-logind[1503]: Removed session 17. Mar 25 02:48:17.941421 kubelet[2798]: E0325 02:48:17.941221 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:48:20.989920 systemd[1]: Started sshd@18-10.230.58.198:22-81.192.87.130:58278.service - OpenSSH per-connection server daemon (81.192.87.130:58278). Mar 25 02:48:21.349841 sshd[4797]: Invalid user wadmin from 81.192.87.130 port 58278 Mar 25 02:48:21.407028 sshd[4797]: Received disconnect from 81.192.87.130 port 58278:11: Bye Bye [preauth] Mar 25 02:48:21.407028 sshd[4797]: Disconnected from invalid user wadmin 81.192.87.130 port 58278 [preauth] Mar 25 02:48:21.409060 systemd[1]: sshd@18-10.230.58.198:22-81.192.87.130:58278.service: Deactivated successfully. Mar 25 02:48:22.459385 containerd[1520]: time="2025-03-25T02:48:22.459083161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"10c22971c231d7f15996d6a55371602c3ec840614dfd7d38f97c0cbe833e1dbb\" pid:4813 exited_at:{seconds:1742870902 nanos:458514853}" Mar 25 02:48:22.916611 systemd[1]: Started sshd@19-10.230.58.198:22-139.178.68.195:44974.service - OpenSSH per-connection server daemon (139.178.68.195:44974). Mar 25 02:48:22.941948 kubelet[2798]: E0325 02:48:22.941833 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:48:23.836686 sshd[4823]: Accepted publickey for core from 139.178.68.195 port 44974 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:48:23.839448 sshd-session[4823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:48:23.848544 systemd-logind[1503]: New session 18 of user core. Mar 25 02:48:23.856113 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 25 02:48:24.557587 sshd[4825]: Connection closed by 139.178.68.195 port 44974 Mar 25 02:48:24.559548 sshd-session[4823]: pam_unix(sshd:session): session closed for user core Mar 25 02:48:24.565715 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit. Mar 25 02:48:24.567158 systemd[1]: sshd@19-10.230.58.198:22-139.178.68.195:44974.service: Deactivated successfully. Mar 25 02:48:24.573352 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 02:48:24.575854 systemd-logind[1503]: Removed session 18. 
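[Editor's note] Interleaved with the legitimate publickey sessions from 139.178.68.195, the log shows preauth probes from 81.192.87.130 trying invalid users ("jonjon-2" earlier, "wadmin" just above), each disconnecting before authentication. A quick, purely illustrative way to tally such probes per source address from a capture like this one:

```go
// Count sshd "Invalid user" preauth attempts per source IP from a log
// fed on stdin, e.g.: go run tally.go < console.log
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+)`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]]++ // key on the source address
		}
	}
	for ip, n := range counts {
		fmt.Printf("%s: %d invalid-user attempts\n", ip, n)
	}
}
```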
Mar 25 02:48:25.075209 containerd[1520]: time="2025-03-25T02:48:25.075058762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"92046b774ea065637dbdfbe012443bd0155e8bae0d7b1db49852ade6c3116802\" pid:4849 exited_at:{seconds:1742870905 nanos:73583699}" Mar 25 02:48:25.641958 update_engine[1504]: I20250325 02:48:25.641048 1504 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 02:48:25.641958 update_engine[1504]: I20250325 02:48:25.641542 1504 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 02:48:25.642768 update_engine[1504]: I20250325 02:48:25.641980 1504 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 25 02:48:25.642768 update_engine[1504]: E20250325 02:48:25.642629 1504 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 02:48:25.642768 update_engine[1504]: I20250325 02:48:25.642700 1504 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 25 02:48:27.943136 kubelet[2798]: E0325 02:48:27.943055 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:48:29.740418 systemd[1]: Started sshd@20-10.230.58.198:22-139.178.68.195:52368.service - OpenSSH per-connection server daemon (139.178.68.195:52368). Mar 25 02:48:30.705126 sshd[4872]: Accepted publickey for core from 139.178.68.195 port 52368 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:48:30.706540 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:48:30.718457 systemd-logind[1503]: New session 19 of user core. Mar 25 02:48:30.725181 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 25 02:48:31.447122 sshd[4874]: Connection closed by 139.178.68.195 port 52368 Mar 25 02:48:31.448153 sshd-session[4872]: pam_unix(sshd:session): session closed for user core Mar 25 02:48:31.454577 systemd[1]: sshd@20-10.230.58.198:22-139.178.68.195:52368.service: Deactivated successfully. Mar 25 02:48:31.457412 systemd[1]: session-19.scope: Deactivated successfully. Mar 25 02:48:31.459989 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit. Mar 25 02:48:31.463003 systemd-logind[1503]: Removed session 19. Mar 25 02:48:32.943669 kubelet[2798]: E0325 02:48:32.943503 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:48:35.641944 update_engine[1504]: I20250325 02:48:35.641198 1504 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 02:48:35.641944 update_engine[1504]: I20250325 02:48:35.641588 1504 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 02:48:35.642851 update_engine[1504]: I20250325 02:48:35.642048 1504 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 25 02:48:35.642851 update_engine[1504]: E20250325 02:48:35.642527 1504 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 02:48:35.642851 update_engine[1504]: I20250325 02:48:35.642601 1504 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 25 02:48:36.610338 systemd[1]: Started sshd@21-10.230.58.198:22-139.178.68.195:43478.service - OpenSSH per-connection server daemon (139.178.68.195:43478). 
Mar 25 02:48:37.515594 sshd[4887]: Accepted publickey for core from 139.178.68.195 port 43478 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:48:37.517905 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:48:37.526239 systemd-logind[1503]: New session 20 of user core. Mar 25 02:48:37.534109 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 25 02:48:37.943985 kubelet[2798]: E0325 02:48:37.943901 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:48:38.222335 sshd[4889]: Connection closed by 139.178.68.195 port 43478 Mar 25 02:48:38.223730 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Mar 25 02:48:38.229461 systemd[1]: sshd@21-10.230.58.198:22-139.178.68.195:43478.service: Deactivated successfully. Mar 25 02:48:38.232997 systemd[1]: session-20.scope: Deactivated successfully. Mar 25 02:48:38.234403 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit. Mar 25 02:48:38.236080 systemd-logind[1503]: Removed session 20. Mar 25 02:48:42.944347 kubelet[2798]: E0325 02:48:42.944270 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Mar 25 02:48:43.379325 systemd[1]: Started sshd@22-10.230.58.198:22-139.178.68.195:43480.service - OpenSSH per-connection server daemon (139.178.68.195:43480). Mar 25 02:48:44.298450 sshd[4906]: Accepted publickey for core from 139.178.68.195 port 43480 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0 Mar 25 02:48:44.300884 sshd-session[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:48:44.310599 systemd-logind[1503]: New session 21 of user core. Mar 25 02:48:44.316113 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 25 02:48:45.021564 sshd[4909]: Connection closed by 139.178.68.195 port 43480 Mar 25 02:48:45.023208 sshd-session[4906]: pam_unix(sshd:session): session closed for user core Mar 25 02:48:45.032129 systemd[1]: sshd@22-10.230.58.198:22-139.178.68.195:43480.service: Deactivated successfully. Mar 25 02:48:45.035253 systemd[1]: session-21.scope: Deactivated successfully. Mar 25 02:48:45.037220 systemd-logind[1503]: Session 21 logged out. Waiting for processes to exit. Mar 25 02:48:45.038865 systemd-logind[1503]: Removed session 21. Mar 25 02:48:45.344701 containerd[1520]: time="2025-03-25T02:48:45.344534292Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"4ec229cd10196d9c73c852f1f3ae31be469fc6457dc90004637cf596304708a2\" pid:4933 exited_at:{seconds:1742870925 nanos:344106290}" Mar 25 02:48:45.640572 update_engine[1504]: I20250325 02:48:45.640435 1504 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 02:48:45.641306 update_engine[1504]: I20250325 02:48:45.640849 1504 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 02:48:45.641377 update_engine[1504]: I20250325 02:48:45.641322 1504 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 25 02:48:45.641841 update_engine[1504]: E20250325 02:48:45.641777 1504 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 25 02:48:45.642030 update_engine[1504]: I20250325 02:48:45.641865 1504 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 25 02:48:45.642030 update_engine[1504]: I20250325 02:48:45.641914 1504 omaha_request_action.cc:617] Omaha request response: Mar 25 02:48:45.642497 update_engine[1504]: E20250325 02:48:45.642057 1504 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 25 02:48:45.644635 update_engine[1504]: I20250325 02:48:45.644583 1504 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 25 02:48:45.644635 update_engine[1504]: I20250325 02:48:45.644618 1504 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 25 02:48:45.644773 update_engine[1504]: I20250325 02:48:45.644634 1504 update_attempter.cc:306] Processing Done. Mar 25 02:48:45.644773 update_engine[1504]: E20250325 02:48:45.644665 1504 update_attempter.cc:619] Update failed. Mar 25 02:48:45.644773 update_engine[1504]: I20250325 02:48:45.644683 1504 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 25 02:48:45.644773 update_engine[1504]: I20250325 02:48:45.644696 1504 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 25 02:48:45.644773 update_engine[1504]: I20250325 02:48:45.644707 1504 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 25 02:48:45.645039 update_engine[1504]: I20250325 02:48:45.644814 1504 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 25 02:48:45.645039 update_engine[1504]: I20250325 02:48:45.644855 1504 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 25 02:48:45.645039 update_engine[1504]: I20250325 02:48:45.644889 1504 omaha_request_action.cc:272] Request: Mar 25 02:48:45.645039 update_engine[1504]: Mar 25 02:48:45.645039 update_engine[1504]: Mar 25 02:48:45.645039 update_engine[1504]: Mar 25 02:48:45.645039 update_engine[1504]: Mar 25 02:48:45.645039 update_engine[1504]: Mar 25 02:48:45.645039 update_engine[1504]: Mar 25 02:48:45.645039 update_engine[1504]: I20250325 02:48:45.644914 1504 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 25 02:48:45.645541 update_engine[1504]: I20250325 02:48:45.645148 1504 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 25 02:48:45.645541 update_engine[1504]: I20250325 02:48:45.645432 1504 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 25 02:48:45.645897 update_engine[1504]: E20250325 02:48:45.645692 1504 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 25 02:48:45.645897 update_engine[1504]: I20250325 02:48:45.645757 1504 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 25 02:48:45.645897 update_engine[1504]: I20250325 02:48:45.645777 1504 omaha_request_action.cc:617] Omaha request response:
Mar 25 02:48:45.645897 update_engine[1504]: I20250325 02:48:45.645789 1504 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 25 02:48:45.645897 update_engine[1504]: I20250325 02:48:45.645802 1504 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 25 02:48:45.645897 update_engine[1504]: I20250325 02:48:45.645813 1504 update_attempter.cc:306] Processing Done.
Mar 25 02:48:45.645897 update_engine[1504]: I20250325 02:48:45.645825 1504 update_attempter.cc:310] Error event sent.
Mar 25 02:48:45.646220 update_engine[1504]: I20250325 02:48:45.645886 1504 update_check_scheduler.cc:74] Next update check in 41m16s
Mar 25 02:48:45.646842 locksmithd[1536]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 25 02:48:45.646842 locksmithd[1536]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 25 02:48:47.945008 kubelet[2798]: E0325 02:48:47.944945 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:48:50.175601 systemd[1]: Started sshd@23-10.230.58.198:22-139.178.68.195:42330.service - OpenSSH per-connection server daemon (139.178.68.195:42330).
Mar 25 02:48:51.098812 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 42330 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:48:51.101030 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:48:51.109315 systemd-logind[1503]: New session 22 of user core.
Mar 25 02:48:51.116111 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 25 02:48:51.821732 sshd[4955]: Connection closed by 139.178.68.195 port 42330
Mar 25 02:48:51.823630 sshd-session[4953]: pam_unix(sshd:session): session closed for user core
Mar 25 02:48:51.831418 systemd[1]: sshd@23-10.230.58.198:22-139.178.68.195:42330.service: Deactivated successfully.
Mar 25 02:48:51.835470 systemd[1]: session-22.scope: Deactivated successfully.
Mar 25 02:48:51.837866 systemd-logind[1503]: Session 22 logged out. Waiting for processes to exit.
Mar 25 02:48:51.840142 systemd-logind[1503]: Removed session 22.
Mar 25 02:48:52.480159 containerd[1520]: time="2025-03-25T02:48:52.480098126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"4adf562458a6f99d410eee725055812a1d5e78cfafbb9ad4bbbb33374f4453c1\" pid:4980 exited_at:{seconds:1742870932 nanos:479772562}"
Mar 25 02:48:52.945688 kubelet[2798]: E0325 02:48:52.945626 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:48:55.054359 containerd[1520]: time="2025-03-25T02:48:55.054283148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"f9f9cb76c7027dfbe230336ca80952b4d6fbc7bc6f0dd6e6efeb9930ad336699\" pid:5002 exited_at:{seconds:1742870935 nanos:53671847}"
Mar 25 02:48:56.979306 systemd[1]: Started sshd@24-10.230.58.198:22-139.178.68.195:45114.service - OpenSSH per-connection server daemon (139.178.68.195:45114).
Mar 25 02:48:57.922424 sshd[5015]: Accepted publickey for core from 139.178.68.195 port 45114 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:48:57.925177 sshd-session[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:48:57.933822 systemd-logind[1503]: New session 23 of user core.
Mar 25 02:48:57.939089 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 25 02:48:57.946120 kubelet[2798]: E0325 02:48:57.946047 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:48:58.648511 sshd[5019]: Connection closed by 139.178.68.195 port 45114
Mar 25 02:48:58.649818 sshd-session[5015]: pam_unix(sshd:session): session closed for user core
Mar 25 02:48:58.655521 systemd-logind[1503]: Session 23 logged out. Waiting for processes to exit.
Mar 25 02:48:58.657015 systemd[1]: sshd@24-10.230.58.198:22-139.178.68.195:45114.service: Deactivated successfully.
Mar 25 02:48:58.659840 systemd[1]: session-23.scope: Deactivated successfully.
Mar 25 02:48:58.661409 systemd-logind[1503]: Removed session 23.
Mar 25 02:49:02.947113 kubelet[2798]: E0325 02:49:02.947060 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:03.811118 systemd[1]: Started sshd@25-10.230.58.198:22-139.178.68.195:45126.service - OpenSSH per-connection server daemon (139.178.68.195:45126).
Mar 25 02:49:04.714840 sshd[5031]: Accepted publickey for core from 139.178.68.195 port 45126 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:49:04.717152 sshd-session[5031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:49:04.725093 systemd-logind[1503]: New session 24 of user core.
Mar 25 02:49:04.734158 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 25 02:49:05.418956 sshd[5033]: Connection closed by 139.178.68.195 port 45126
Mar 25 02:49:05.419916 sshd-session[5031]: pam_unix(sshd:session): session closed for user core
Mar 25 02:49:05.426163 systemd[1]: sshd@25-10.230.58.198:22-139.178.68.195:45126.service: Deactivated successfully.
Mar 25 02:49:05.430383 systemd[1]: session-24.scope: Deactivated successfully.
Mar 25 02:49:05.432094 systemd-logind[1503]: Session 24 logged out. Waiting for processes to exit.
Mar 25 02:49:05.433626 systemd-logind[1503]: Removed session 24.
Mar 25 02:49:05.866697 kubelet[2798]: E0325 02:49:05.866332 2798 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:49:05.873156 kubelet[2798]: E0325 02:49:05.866429 2798 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:49:07.948325 kubelet[2798]: E0325 02:49:07.948231 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:10.579461 systemd[1]: Started sshd@26-10.230.58.198:22-139.178.68.195:60982.service - OpenSSH per-connection server daemon (139.178.68.195:60982).
Mar 25 02:49:11.496632 sshd[5045]: Accepted publickey for core from 139.178.68.195 port 60982 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:49:11.499443 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:49:11.509261 systemd-logind[1503]: New session 25 of user core.
Mar 25 02:49:11.516191 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 25 02:49:12.218442 sshd[5047]: Connection closed by 139.178.68.195 port 60982
Mar 25 02:49:12.219507 sshd-session[5045]: pam_unix(sshd:session): session closed for user core
Mar 25 02:49:12.225523 systemd-logind[1503]: Session 25 logged out. Waiting for processes to exit.
Mar 25 02:49:12.226850 systemd[1]: sshd@26-10.230.58.198:22-139.178.68.195:60982.service: Deactivated successfully.
Mar 25 02:49:12.229876 systemd[1]: session-25.scope: Deactivated successfully.
Mar 25 02:49:12.231767 systemd-logind[1503]: Removed session 25.
Mar 25 02:49:12.948702 kubelet[2798]: E0325 02:49:12.948615 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:17.373324 systemd[1]: Started sshd@27-10.230.58.198:22-139.178.68.195:47306.service - OpenSSH per-connection server daemon (139.178.68.195:47306).
Mar 25 02:49:17.949677 kubelet[2798]: E0325 02:49:17.949611 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:18.274469 sshd[5059]: Accepted publickey for core from 139.178.68.195 port 47306 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:49:18.276810 sshd-session[5059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:49:18.285679 systemd-logind[1503]: New session 26 of user core.
Mar 25 02:49:18.289086 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 25 02:49:18.979528 sshd[5061]: Connection closed by 139.178.68.195 port 47306
Mar 25 02:49:18.979350 sshd-session[5059]: pam_unix(sshd:session): session closed for user core
Mar 25 02:49:18.986640 systemd[1]: sshd@27-10.230.58.198:22-139.178.68.195:47306.service: Deactivated successfully.
Mar 25 02:49:18.991678 systemd[1]: session-26.scope: Deactivated successfully.
Mar 25 02:49:18.993339 systemd-logind[1503]: Session 26 logged out. Waiting for processes to exit.
Mar 25 02:49:18.994768 systemd-logind[1503]: Removed session 26.
Mar 25 02:49:22.481995 containerd[1520]: time="2025-03-25T02:49:22.481904214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"14be393c26e90b942dd3513495569fc4d7f81be8a46bef8d5c4baef3785bae34\" pid:5089 exited_at:{seconds:1742870962 nanos:481097921}"
Mar 25 02:49:22.950354 kubelet[2798]: E0325 02:49:22.950264 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:24.144550 systemd[1]: Started sshd@28-10.230.58.198:22-139.178.68.195:47314.service - OpenSSH per-connection server daemon (139.178.68.195:47314).
Mar 25 02:49:25.063173 sshd[5099]: Accepted publickey for core from 139.178.68.195 port 47314 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:49:25.065641 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:49:25.087341 systemd[1]: Started sshd@29-10.230.58.198:22-81.192.87.130:13419.service - OpenSSH per-connection server daemon (81.192.87.130:13419).
Mar 25 02:49:25.097690 containerd[1520]: time="2025-03-25T02:49:25.097480836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"e5c90d2d7ab55de86c0fceeec6c22de83c54444660c252f857d6cc88a23b854e\" pid:5114 exited_at:{seconds:1742870965 nanos:95422586}"
Mar 25 02:49:25.100059 systemd-logind[1503]: New session 27 of user core.
Mar 25 02:49:25.108119 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 25 02:49:25.442630 sshd[5127]: Invalid user o from 81.192.87.130 port 13419
Mar 25 02:49:25.496422 sshd[5127]: Received disconnect from 81.192.87.130 port 13419:11: Bye Bye [preauth]
Mar 25 02:49:25.496422 sshd[5127]: Disconnected from invalid user o 81.192.87.130 port 13419 [preauth]
Mar 25 02:49:25.499175 systemd[1]: sshd@29-10.230.58.198:22-81.192.87.130:13419.service: Deactivated successfully.
Mar 25 02:49:25.795744 sshd[5128]: Connection closed by 139.178.68.195 port 47314
Mar 25 02:49:25.794508 sshd-session[5099]: pam_unix(sshd:session): session closed for user core
Mar 25 02:49:25.799662 systemd-logind[1503]: Session 27 logged out. Waiting for processes to exit.
Mar 25 02:49:25.800314 systemd[1]: sshd@28-10.230.58.198:22-139.178.68.195:47314.service: Deactivated successfully.
Mar 25 02:49:25.804381 systemd[1]: session-27.scope: Deactivated successfully.
Mar 25 02:49:25.807758 systemd-logind[1503]: Removed session 27.
Mar 25 02:49:27.951606 kubelet[2798]: E0325 02:49:27.951485 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:30.951213 systemd[1]: Started sshd@30-10.230.58.198:22-139.178.68.195:54736.service - OpenSSH per-connection server daemon (139.178.68.195:54736).
Mar 25 02:49:31.857522 sshd[5145]: Accepted publickey for core from 139.178.68.195 port 54736 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:49:31.859856 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:49:31.868186 systemd-logind[1503]: New session 28 of user core.
Mar 25 02:49:31.873121 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 25 02:49:32.558642 sshd[5147]: Connection closed by 139.178.68.195 port 54736
Mar 25 02:49:32.559794 sshd-session[5145]: pam_unix(sshd:session): session closed for user core
Mar 25 02:49:32.565745 systemd[1]: sshd@30-10.230.58.198:22-139.178.68.195:54736.service: Deactivated successfully.
Mar 25 02:49:32.568908 systemd[1]: session-28.scope: Deactivated successfully.
Mar 25 02:49:32.570169 systemd-logind[1503]: Session 28 logged out. Waiting for processes to exit.
Mar 25 02:49:32.572374 systemd-logind[1503]: Removed session 28.
Mar 25 02:49:32.952702 kubelet[2798]: E0325 02:49:32.952630 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:37.714134 systemd[1]: Started sshd@31-10.230.58.198:22-139.178.68.195:51110.service - OpenSSH per-connection server daemon (139.178.68.195:51110).
Mar 25 02:49:37.953502 kubelet[2798]: E0325 02:49:37.953433 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:38.619697 sshd[5161]: Accepted publickey for core from 139.178.68.195 port 51110 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:49:38.621710 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:49:38.630120 systemd-logind[1503]: New session 29 of user core.
Mar 25 02:49:38.637109 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 25 02:49:39.324536 sshd[5163]: Connection closed by 139.178.68.195 port 51110
Mar 25 02:49:39.325624 sshd-session[5161]: pam_unix(sshd:session): session closed for user core
Mar 25 02:49:39.331433 systemd-logind[1503]: Session 29 logged out. Waiting for processes to exit.
Mar 25 02:49:39.332821 systemd[1]: sshd@31-10.230.58.198:22-139.178.68.195:51110.service: Deactivated successfully.
Mar 25 02:49:39.337394 systemd[1]: session-29.scope: Deactivated successfully.
Mar 25 02:49:39.339102 systemd-logind[1503]: Removed session 29.
Mar 25 02:49:42.954325 kubelet[2798]: E0325 02:49:42.953866 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:44.483249 systemd[1]: Started sshd@32-10.230.58.198:22-139.178.68.195:51118.service - OpenSSH per-connection server daemon (139.178.68.195:51118).
Mar 25 02:49:45.339261 containerd[1520]: time="2025-03-25T02:49:45.339199890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"c63cbd137f5ad432235409e1e514a63351ce7f15523af67c32e63a18850bc793\" pid:5190 exited_at:{seconds:1742870985 nanos:338557838}"
Mar 25 02:49:45.397732 sshd[5176]: Accepted publickey for core from 139.178.68.195 port 51118 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:49:45.401966 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:49:45.411820 systemd-logind[1503]: New session 30 of user core.
Mar 25 02:49:45.420170 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 25 02:49:46.112833 sshd[5199]: Connection closed by 139.178.68.195 port 51118
Mar 25 02:49:46.114220 sshd-session[5176]: pam_unix(sshd:session): session closed for user core
Mar 25 02:49:46.119338 systemd-logind[1503]: Session 30 logged out. Waiting for processes to exit.
Mar 25 02:49:46.120758 systemd[1]: sshd@32-10.230.58.198:22-139.178.68.195:51118.service: Deactivated successfully.
Mar 25 02:49:46.123942 systemd[1]: session-30.scope: Deactivated successfully.
Mar 25 02:49:46.125396 systemd-logind[1503]: Removed session 30.
Mar 25 02:49:47.955177 kubelet[2798]: E0325 02:49:47.955084 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:51.273367 systemd[1]: Started sshd@33-10.230.58.198:22-139.178.68.195:37764.service - OpenSSH per-connection server daemon (139.178.68.195:37764).
Mar 25 02:49:52.181616 sshd[5220]: Accepted publickey for core from 139.178.68.195 port 37764 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:49:52.183816 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:49:52.191983 systemd-logind[1503]: New session 31 of user core.
Mar 25 02:49:52.198082 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 25 02:49:52.455861 containerd[1520]: time="2025-03-25T02:49:52.455678074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"df97b8b9be53ae17b3031ee8d7576d47ac54b077f7a851c8510f402b07938d43\" pid:5234 exited_at:{seconds:1742870992 nanos:455255768}"
Mar 25 02:49:52.895076 sshd[5222]: Connection closed by 139.178.68.195 port 37764
Mar 25 02:49:52.896228 sshd-session[5220]: pam_unix(sshd:session): session closed for user core
Mar 25 02:49:52.902742 systemd[1]: sshd@33-10.230.58.198:22-139.178.68.195:37764.service: Deactivated successfully.
Mar 25 02:49:52.906340 systemd[1]: session-31.scope: Deactivated successfully.
Mar 25 02:49:52.907827 systemd-logind[1503]: Session 31 logged out. Waiting for processes to exit.
Mar 25 02:49:52.909801 systemd-logind[1503]: Removed session 31.
Mar 25 02:49:52.956285 kubelet[2798]: E0325 02:49:52.956196 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:55.077092 containerd[1520]: time="2025-03-25T02:49:55.076962874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"dcf16b0088859d9719feea495f56a56288acb249de09be97ab4a3f110e74e24e\" pid:5266 exited_at:{seconds:1742870995 nanos:76309911}"
Mar 25 02:49:57.957168 kubelet[2798]: E0325 02:49:57.957062 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:49:58.057097 systemd[1]: Started sshd@34-10.230.58.198:22-139.178.68.195:33632.service - OpenSSH per-connection server daemon (139.178.68.195:33632).
Mar 25 02:49:58.967709 sshd[5281]: Accepted publickey for core from 139.178.68.195 port 33632 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:49:58.970011 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:49:58.977758 systemd-logind[1503]: New session 32 of user core.
Mar 25 02:49:58.987114 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 25 02:49:59.680468 sshd[5283]: Connection closed by 139.178.68.195 port 33632
Mar 25 02:49:59.680296 sshd-session[5281]: pam_unix(sshd:session): session closed for user core
Mar 25 02:49:59.686284 systemd[1]: sshd@34-10.230.58.198:22-139.178.68.195:33632.service: Deactivated successfully.
Mar 25 02:49:59.689354 systemd[1]: session-32.scope: Deactivated successfully.
Mar 25 02:49:59.690959 systemd-logind[1503]: Session 32 logged out. Waiting for processes to exit.
Mar 25 02:49:59.692651 systemd-logind[1503]: Removed session 32.
Mar 25 02:50:02.957861 kubelet[2798]: E0325 02:50:02.957787 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:04.835559 systemd[1]: Started sshd@35-10.230.58.198:22-139.178.68.195:33646.service - OpenSSH per-connection server daemon (139.178.68.195:33646).
Mar 25 02:50:05.738743 sshd[5296]: Accepted publickey for core from 139.178.68.195 port 33646 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:50:05.741119 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:50:05.749355 systemd-logind[1503]: New session 33 of user core.
Mar 25 02:50:05.759238 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 25 02:50:06.464221 sshd[5298]: Connection closed by 139.178.68.195 port 33646
Mar 25 02:50:06.465217 sshd-session[5296]: pam_unix(sshd:session): session closed for user core
Mar 25 02:50:06.470074 systemd[1]: sshd@35-10.230.58.198:22-139.178.68.195:33646.service: Deactivated successfully.
Mar 25 02:50:06.473818 systemd[1]: session-33.scope: Deactivated successfully.
Mar 25 02:50:06.476087 systemd-logind[1503]: Session 33 logged out. Waiting for processes to exit.
Mar 25 02:50:06.478320 systemd-logind[1503]: Removed session 33.
Mar 25 02:50:07.959109 kubelet[2798]: E0325 02:50:07.959031 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:11.626269 systemd[1]: Started sshd@36-10.230.58.198:22-139.178.68.195:56908.service - OpenSSH per-connection server daemon (139.178.68.195:56908).
Mar 25 02:50:12.537180 sshd[5312]: Accepted publickey for core from 139.178.68.195 port 56908 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:50:12.539380 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:50:12.547087 systemd-logind[1503]: New session 34 of user core.
Mar 25 02:50:12.555182 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 25 02:50:12.959979 kubelet[2798]: E0325 02:50:12.959859 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:13.258028 sshd[5314]: Connection closed by 139.178.68.195 port 56908
Mar 25 02:50:13.257058 sshd-session[5312]: pam_unix(sshd:session): session closed for user core
Mar 25 02:50:13.263322 systemd[1]: sshd@36-10.230.58.198:22-139.178.68.195:56908.service: Deactivated successfully.
Mar 25 02:50:13.267543 systemd[1]: session-34.scope: Deactivated successfully.
Mar 25 02:50:13.269603 systemd-logind[1503]: Session 34 logged out. Waiting for processes to exit.
Mar 25 02:50:13.271190 systemd-logind[1503]: Removed session 34.
Mar 25 02:50:17.960349 kubelet[2798]: E0325 02:50:17.960258 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:18.411358 systemd[1]: Started sshd@37-10.230.58.198:22-139.178.68.195:34444.service - OpenSSH per-connection server daemon (139.178.68.195:34444).
Mar 25 02:50:19.320784 sshd[5332]: Accepted publickey for core from 139.178.68.195 port 34444 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:50:19.323225 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:50:19.331768 systemd-logind[1503]: New session 35 of user core.
Mar 25 02:50:19.341209 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 25 02:50:20.029110 sshd[5334]: Connection closed by 139.178.68.195 port 34444
Mar 25 02:50:20.030247 sshd-session[5332]: pam_unix(sshd:session): session closed for user core
Mar 25 02:50:20.036641 systemd-logind[1503]: Session 35 logged out. Waiting for processes to exit.
Mar 25 02:50:20.038197 systemd[1]: sshd@37-10.230.58.198:22-139.178.68.195:34444.service: Deactivated successfully.
Mar 25 02:50:20.041541 systemd[1]: session-35.scope: Deactivated successfully.
Mar 25 02:50:20.043526 systemd-logind[1503]: Removed session 35.
Mar 25 02:50:22.459359 containerd[1520]: time="2025-03-25T02:50:22.459262017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"d9fb3931e8d0e764d19ff7595c29ba575ea5613457577ec862a449b8af3a9f7f\" pid:5369 exited_at:{seconds:1742871022 nanos:458918630}"
Mar 25 02:50:22.960979 kubelet[2798]: E0325 02:50:22.960917 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:25.056170 containerd[1520]: time="2025-03-25T02:50:25.055896188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"1ca8755ba51d5bf16c82d670f7f401b074e9301e5a6469c24d27524c83687517\" pid:5390 exited_at:{seconds:1742871025 nanos:54988926}"
Mar 25 02:50:25.192928 systemd[1]: Started sshd@38-10.230.58.198:22-139.178.68.195:34448.service - OpenSSH per-connection server daemon (139.178.68.195:34448).
Mar 25 02:50:26.103515 sshd[5403]: Accepted publickey for core from 139.178.68.195 port 34448 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:50:26.106913 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:50:26.117548 systemd-logind[1503]: New session 36 of user core.
Mar 25 02:50:26.125229 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 25 02:50:26.831571 sshd[5405]: Connection closed by 139.178.68.195 port 34448
Mar 25 02:50:26.832413 sshd-session[5403]: pam_unix(sshd:session): session closed for user core
Mar 25 02:50:26.838762 systemd[1]: sshd@38-10.230.58.198:22-139.178.68.195:34448.service: Deactivated successfully.
Mar 25 02:50:26.842041 systemd[1]: session-36.scope: Deactivated successfully.
Mar 25 02:50:26.844651 systemd-logind[1503]: Session 36 logged out. Waiting for processes to exit.
Mar 25 02:50:26.846256 systemd-logind[1503]: Removed session 36.
Mar 25 02:50:27.107217 systemd[1]: Started sshd@39-10.230.58.198:22-81.192.87.130:25062.service - OpenSSH per-connection server daemon (81.192.87.130:25062).
Mar 25 02:50:27.451626 sshd[5418]: Invalid user yuantuis from 81.192.87.130 port 25062
Mar 25 02:50:27.506098 sshd[5418]: Received disconnect from 81.192.87.130 port 25062:11: Bye Bye [preauth]
Mar 25 02:50:27.506098 sshd[5418]: Disconnected from invalid user yuantuis 81.192.87.130 port 25062 [preauth]
Mar 25 02:50:27.509772 systemd[1]: sshd@39-10.230.58.198:22-81.192.87.130:25062.service: Deactivated successfully.
Mar 25 02:50:27.962057 kubelet[2798]: E0325 02:50:27.961982 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:31.989956 systemd[1]: Started sshd@40-10.230.58.198:22-139.178.68.195:57772.service - OpenSSH per-connection server daemon (139.178.68.195:57772).
Mar 25 02:50:32.896830 sshd[5425]: Accepted publickey for core from 139.178.68.195 port 57772 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:50:32.899350 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:50:32.908471 systemd-logind[1503]: New session 37 of user core.
Mar 25 02:50:32.917133 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 25 02:50:32.962471 kubelet[2798]: E0325 02:50:32.962416 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:33.638748 sshd[5427]: Connection closed by 139.178.68.195 port 57772
Mar 25 02:50:33.639371 sshd-session[5425]: pam_unix(sshd:session): session closed for user core
Mar 25 02:50:33.645970 systemd[1]: sshd@40-10.230.58.198:22-139.178.68.195:57772.service: Deactivated successfully.
Mar 25 02:50:33.649236 systemd[1]: session-37.scope: Deactivated successfully.
Mar 25 02:50:33.650964 systemd-logind[1503]: Session 37 logged out. Waiting for processes to exit.
Mar 25 02:50:33.653223 systemd-logind[1503]: Removed session 37.
Mar 25 02:50:37.963568 kubelet[2798]: E0325 02:50:37.963426 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:38.796438 systemd[1]: Started sshd@41-10.230.58.198:22-139.178.68.195:53156.service - OpenSSH per-connection server daemon (139.178.68.195:53156).
Mar 25 02:50:39.706902 sshd[5440]: Accepted publickey for core from 139.178.68.195 port 53156 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:50:39.711207 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:50:39.721837 systemd-logind[1503]: New session 38 of user core.
Mar 25 02:50:39.727105 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 25 02:50:40.425106 sshd[5443]: Connection closed by 139.178.68.195 port 53156
Mar 25 02:50:40.425577 sshd-session[5440]: pam_unix(sshd:session): session closed for user core
Mar 25 02:50:40.432133 systemd[1]: sshd@41-10.230.58.198:22-139.178.68.195:53156.service: Deactivated successfully.
Mar 25 02:50:40.432843 systemd-logind[1503]: Session 38 logged out. Waiting for processes to exit.
Mar 25 02:50:40.436630 systemd[1]: session-38.scope: Deactivated successfully.
Mar 25 02:50:40.439767 systemd-logind[1503]: Removed session 38.
Mar 25 02:50:42.964112 kubelet[2798]: E0325 02:50:42.963977 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:45.349084 containerd[1520]: time="2025-03-25T02:50:45.349020305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"b702f244f00d1e58dd80e602c809cf070a4ad61687feb7c0555aca1a4b2511af\" pid:5468 exited_at:{seconds:1742871045 nanos:348080841}"
Mar 25 02:50:45.582162 systemd[1]: Started sshd@42-10.230.58.198:22-139.178.68.195:53896.service - OpenSSH per-connection server daemon (139.178.68.195:53896).
Mar 25 02:50:46.497319 sshd[5478]: Accepted publickey for core from 139.178.68.195 port 53896 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:50:46.499671 sshd-session[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:50:46.509330 systemd-logind[1503]: New session 39 of user core.
Mar 25 02:50:46.516111 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 25 02:50:47.210168 sshd[5480]: Connection closed by 139.178.68.195 port 53896
Mar 25 02:50:47.211225 sshd-session[5478]: pam_unix(sshd:session): session closed for user core
Mar 25 02:50:47.216030 systemd[1]: sshd@42-10.230.58.198:22-139.178.68.195:53896.service: Deactivated successfully.
Mar 25 02:50:47.219089 systemd[1]: session-39.scope: Deactivated successfully.
Mar 25 02:50:47.221030 systemd-logind[1503]: Session 39 logged out. Waiting for processes to exit.
Mar 25 02:50:47.222856 systemd-logind[1503]: Removed session 39.
Mar 25 02:50:47.965754 kubelet[2798]: E0325 02:50:47.965662 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:52.370368 systemd[1]: Started sshd@43-10.230.58.198:22-139.178.68.195:53912.service - OpenSSH per-connection server daemon (139.178.68.195:53912).
Mar 25 02:50:52.456924 containerd[1520]: time="2025-03-25T02:50:52.456636741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"b11d2bcf658a82ee2dadb97a1b299d92ab3e878c7ee25dbaf8d7a0517810fa9f\" pid:5507 exited_at:{seconds:1742871052 nanos:456262497}"
Mar 25 02:50:52.966858 kubelet[2798]: E0325 02:50:52.966784 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:53.290095 sshd[5493]: Accepted publickey for core from 139.178.68.195 port 53912 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:50:53.292333 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:50:53.300984 systemd-logind[1503]: New session 40 of user core.
Mar 25 02:50:53.308080 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 25 02:50:53.997422 sshd[5515]: Connection closed by 139.178.68.195 port 53912
Mar 25 02:50:53.997262 sshd-session[5493]: pam_unix(sshd:session): session closed for user core
Mar 25 02:50:54.001653 systemd[1]: sshd@43-10.230.58.198:22-139.178.68.195:53912.service: Deactivated successfully.
Mar 25 02:50:54.005768 systemd[1]: session-40.scope: Deactivated successfully.
Mar 25 02:50:54.008634 systemd-logind[1503]: Session 40 logged out. Waiting for processes to exit.
Mar 25 02:50:54.010358 systemd-logind[1503]: Removed session 40.
Mar 25 02:50:55.054582 containerd[1520]: time="2025-03-25T02:50:55.054484826Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"b3a00e2425a778033773ce4722cfc850d39dcc70e0992bc91411dd55f3d092ce\" pid:5539 exited_at:{seconds:1742871055 nanos:53669104}"
Mar 25 02:50:57.967695 kubelet[2798]: E0325 02:50:57.967570 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:50:59.156927 systemd[1]: Started sshd@44-10.230.58.198:22-139.178.68.195:48986.service - OpenSSH per-connection server daemon (139.178.68.195:48986).
Mar 25 02:51:00.072628 sshd[5554]: Accepted publickey for core from 139.178.68.195 port 48986 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:00.074936 sshd-session[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:00.083119 systemd-logind[1503]: New session 41 of user core.
Mar 25 02:51:00.089116 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 25 02:51:00.794946 sshd[5556]: Connection closed by 139.178.68.195 port 48986
Mar 25 02:51:00.795711 sshd-session[5554]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:00.801065 systemd[1]: sshd@44-10.230.58.198:22-139.178.68.195:48986.service: Deactivated successfully.
Mar 25 02:51:00.804389 systemd[1]: session-41.scope: Deactivated successfully.
Mar 25 02:51:00.805747 systemd-logind[1503]: Session 41 logged out. Waiting for processes to exit.
Mar 25 02:51:00.807842 systemd-logind[1503]: Removed session 41.
Mar 25 02:51:02.968358 kubelet[2798]: E0325 02:51:02.968282 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:05.634477 kubelet[2798]: E0325 02:51:05.634372 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:51:05.635231 kubelet[2798]: E0325 02:51:05.635130 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst"
Mar 25 02:51:05.636130 kubelet[2798]: E0325 02:51:05.635957 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:51:05.636130 kubelet[2798]: E0325 02:51:05.636022 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-6f6b679f8f-7vrcq"
Mar 25 02:51:05.638131 kubelet[2798]: E0325 02:51:05.637225 2798 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="kube-system/coredns-6f6b679f8f-7vrcq"
Mar 25 02:51:05.638131 kubelet[2798]: E0325 02:51:05.637385 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7vrcq_kube-system(5bfa8497-c633-4917-a163-1574ac7f04bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7vrcq_kube-system(5bfa8497-c633-4917-a163-1574ac7f04bf)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="kube-system/coredns-6f6b679f8f-7vrcq" podUID="5bfa8497-c633-4917-a163-1574ac7f04bf"
Mar 25 02:51:05.638131 kubelet[2798]: E0325 02:51:05.637773 2798 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst"
Mar 25 02:51:05.638131 kubelet[2798]: E0325 02:51:05.637853 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7867f858bf-nmcst_calico-apiserver(faf12e28-177f-4d74-9f9e-8e39e2034cdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7867f858bf-nmcst_calico-apiserver(faf12e28-177f-4d74-9f9e-8e39e2034cdd)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst" podUID="faf12e28-177f-4d74-9f9e-8e39e2034cdd"
Mar 25 02:51:05.716631 containerd[1520]: time="2025-03-25T02:51:05.716448897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7867f858bf-nmcst,Uid:faf12e28-177f-4d74-9f9e-8e39e2034cdd,Namespace:calico-apiserver,Attempt:0,}"
Mar 25 02:51:05.717358 containerd[1520]: time="2025-03-25T02:51:05.716928685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vrcq,Uid:5bfa8497-c633-4917-a163-1574ac7f04bf,Namespace:kube-system,Attempt:0,}"
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vrcq,Uid:5bfa8497-c633-4917-a163-1574ac7f04bf,Namespace:kube-system,Attempt:0,}" Mar 25 02:51:05.725345 containerd[1520]: time="2025-03-25T02:51:05.717053227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7vrcq,Uid:5bfa8497-c633-4917-a163-1574ac7f04bf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-6f6b679f8f-7vrcq_kube-system_5bfa8497-c633-4917-a163-1574ac7f04bf_0\": name \"coredns-6f6b679f8f-7vrcq_kube-system_5bfa8497-c633-4917-a163-1574ac7f04bf_0\" is reserved for \"fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5\"" Mar 25 02:51:05.725345 containerd[1520]: time="2025-03-25T02:51:05.717488174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7867f858bf-nmcst,Uid:faf12e28-177f-4d74-9f9e-8e39e2034cdd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-apiserver-7867f858bf-nmcst_calico-apiserver_faf12e28-177f-4d74-9f9e-8e39e2034cdd_0\": name \"calico-apiserver-7867f858bf-nmcst_calico-apiserver_faf12e28-177f-4d74-9f9e-8e39e2034cdd_0\" is reserved for \"efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc\"" Mar 25 02:51:05.727245 kubelet[2798]: E0325 02:51:05.725481 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-6f6b679f8f-7vrcq_kube-system_5bfa8497-c633-4917-a163-1574ac7f04bf_0\": name \"coredns-6f6b679f8f-7vrcq_kube-system_5bfa8497-c633-4917-a163-1574ac7f04bf_0\" is reserved for \"fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5\"" Mar 25 02:51:05.727245 kubelet[2798]: E0325 02:51:05.725481 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-apiserver-7867f858bf-nmcst_calico-apiserver_faf12e28-177f-4d74-9f9e-8e39e2034cdd_0\": name \"calico-apiserver-7867f858bf-nmcst_calico-apiserver_faf12e28-177f-4d74-9f9e-8e39e2034cdd_0\" is reserved for \"efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc\"" Mar 25 02:51:05.727245 kubelet[2798]: E0325 02:51:05.725566 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"coredns-6f6b679f8f-7vrcq_kube-system_5bfa8497-c633-4917-a163-1574ac7f04bf_0\": name \"coredns-6f6b679f8f-7vrcq_kube-system_5bfa8497-c633-4917-a163-1574ac7f04bf_0\" is reserved for \"fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5\"" pod="kube-system/coredns-6f6b679f8f-7vrcq" Mar 25 02:51:05.727245 kubelet[2798]: E0325 02:51:05.725570 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-apiserver-7867f858bf-nmcst_calico-apiserver_faf12e28-177f-4d74-9f9e-8e39e2034cdd_0\": name \"calico-apiserver-7867f858bf-nmcst_calico-apiserver_faf12e28-177f-4d74-9f9e-8e39e2034cdd_0\" is reserved for \"efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc\"" pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst" Mar 25 02:51:05.727487 kubelet[2798]: E0325 02:51:05.725595 2798 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name 
\"coredns-6f6b679f8f-7vrcq_kube-system_5bfa8497-c633-4917-a163-1574ac7f04bf_0\": name \"coredns-6f6b679f8f-7vrcq_kube-system_5bfa8497-c633-4917-a163-1574ac7f04bf_0\" is reserved for \"fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5\"" pod="kube-system/coredns-6f6b679f8f-7vrcq" Mar 25 02:51:05.727487 kubelet[2798]: E0325 02:51:05.725621 2798 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to reserve sandbox name \"calico-apiserver-7867f858bf-nmcst_calico-apiserver_faf12e28-177f-4d74-9f9e-8e39e2034cdd_0\": name \"calico-apiserver-7867f858bf-nmcst_calico-apiserver_faf12e28-177f-4d74-9f9e-8e39e2034cdd_0\" is reserved for \"efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc\"" pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst" Mar 25 02:51:05.727487 kubelet[2798]: E0325 02:51:05.725674 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7vrcq_kube-system(5bfa8497-c633-4917-a163-1574ac7f04bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7vrcq_kube-system(5bfa8497-c633-4917-a163-1574ac7f04bf)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"coredns-6f6b679f8f-7vrcq_kube-system_5bfa8497-c633-4917-a163-1574ac7f04bf_0\\\": name \\\"coredns-6f6b679f8f-7vrcq_kube-system_5bfa8497-c633-4917-a163-1574ac7f04bf_0\\\" is reserved for \\\"fac90d194b4365cc0a91e034b9526f812a318fa1eec34eef125d9ecc281397d5\\\"\"" pod="kube-system/coredns-6f6b679f8f-7vrcq" podUID="5bfa8497-c633-4917-a163-1574ac7f04bf" Mar 25 02:51:05.727694 kubelet[2798]: E0325 02:51:05.725683 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7867f858bf-nmcst_calico-apiserver(faf12e28-177f-4d74-9f9e-8e39e2034cdd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7867f858bf-nmcst_calico-apiserver(faf12e28-177f-4d74-9f9e-8e39e2034cdd)\\\": rpc error: code = Unknown desc = failed to reserve sandbox name \\\"calico-apiserver-7867f858bf-nmcst_calico-apiserver_faf12e28-177f-4d74-9f9e-8e39e2034cdd_0\\\": name \\\"calico-apiserver-7867f858bf-nmcst_calico-apiserver_faf12e28-177f-4d74-9f9e-8e39e2034cdd_0\\\" is reserved for \\\"efee5e986526424df10c9fde3e16403620ba52241c7d8c20abd24c1e68aebfdc\\\"\"" pod="calico-apiserver/calico-apiserver-7867f858bf-nmcst" podUID="faf12e28-177f-4d74-9f9e-8e39e2034cdd" Mar 25 02:51:05.954720 systemd[1]: Started sshd@45-10.230.58.198:22-139.178.68.195:51982.service - OpenSSH per-connection server daemon (139.178.68.195:51982). 
Mar 25 02:51:06.640333 kubelet[2798]: E0325 02:51:06.640168 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:51:06.640333 kubelet[2798]: E0325 02:51:06.640305 2798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-7867f858bf-sjh4v"
Mar 25 02:51:06.640333 kubelet[2798]: E0325 02:51:06.640336 2798 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-7867f858bf-sjh4v"
Mar 25 02:51:06.642104 kubelet[2798]: E0325 02:51:06.640415 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7867f858bf-sjh4v_calico-apiserver(7b782bad-d6a9-42d9-9981-9048e9834ec0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7867f858bf-sjh4v_calico-apiserver(7b782bad-d6a9-42d9-9981-9048e9834ec0)\\\": rpc error: code = DeadlineExceeded desc = context deadline exceeded\"" pod="calico-apiserver/calico-apiserver-7867f858bf-sjh4v" podUID="7b782bad-d6a9-42d9-9981-9048e9834ec0"
Mar 25 02:51:06.862312 sshd[5570]: Accepted publickey for core from 139.178.68.195 port 51982 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:06.865392 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:06.874300 systemd-logind[1503]: New session 42 of user core.
Mar 25 02:51:06.882214 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 25 02:51:07.570442 sshd[5572]: Connection closed by 139.178.68.195 port 51982
Mar 25 02:51:07.571636 sshd-session[5570]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:07.577413 systemd[1]: sshd@45-10.230.58.198:22-139.178.68.195:51982.service: Deactivated successfully.
Mar 25 02:51:07.581346 systemd[1]: session-42.scope: Deactivated successfully.
Mar 25 02:51:07.583012 systemd-logind[1503]: Session 42 logged out. Waiting for processes to exit.
Mar 25 02:51:07.584419 systemd-logind[1503]: Removed session 42.
Mar 25 02:51:07.969404 kubelet[2798]: E0325 02:51:07.969341 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:10.874561 kubelet[2798]: E0325 02:51:10.873936 2798 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:51:10.874561 kubelet[2798]: E0325 02:51:10.874042 2798 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:51:12.732730 systemd[1]: Started sshd@46-10.230.58.198:22-139.178.68.195:51990.service - OpenSSH per-connection server daemon (139.178.68.195:51990).
Mar 25 02:51:12.969815 kubelet[2798]: E0325 02:51:12.969730 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:13.647324 sshd[5584]: Accepted publickey for core from 139.178.68.195 port 51990 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:13.649612 sshd-session[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:13.658422 systemd-logind[1503]: New session 43 of user core.
Mar 25 02:51:13.664065 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 25 02:51:13.788868 containerd[1520]: time="2025-03-25T02:51:13.785419263Z" level=warning msg="container event discarded" container=2c7c79768f7f3d30881d9b8813ea3087edeb8500d8d9fdad6ebe0e4ae5bd7707 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:13.801195 containerd[1520]: time="2025-03-25T02:51:13.801120296Z" level=warning msg="container event discarded" container=2c7c79768f7f3d30881d9b8813ea3087edeb8500d8d9fdad6ebe0e4ae5bd7707 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:13.801195 containerd[1520]: time="2025-03-25T02:51:13.801178017Z" level=warning msg="container event discarded" container=87eece11be12cb1a33224a87f8378d2c7023e17077013acbca146e9ffbb98b20 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:13.801195 containerd[1520]: time="2025-03-25T02:51:13.801196356Z" level=warning msg="container event discarded" container=87eece11be12cb1a33224a87f8378d2c7023e17077013acbca146e9ffbb98b20 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:13.831511 containerd[1520]: time="2025-03-25T02:51:13.831405972Z" level=warning msg="container event discarded" container=70728ff1feeca6dbc7ebdaa7724a92a217a3bd5f4c946b4fba31cba108fe6dc8 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:13.831511 containerd[1520]: time="2025-03-25T02:51:13.831474610Z" level=warning msg="container event discarded" container=70728ff1feeca6dbc7ebdaa7724a92a217a3bd5f4c946b4fba31cba108fe6dc8 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:13.855854 containerd[1520]: time="2025-03-25T02:51:13.855753105Z" level=warning msg="container event discarded" container=e36b477fb47d837183caeeba9553ca1d9278e95b0d706b0fb3bf1b8b4fd14fb4 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:13.855854 containerd[1520]: time="2025-03-25T02:51:13.855810961Z" level=warning msg="container event discarded" container=01312a9b006d97e5056525a49ab71d90d7293353655a4b18838c8efe6617639f type=CONTAINER_CREATED_EVENT
Mar 25 02:51:13.880215 containerd[1520]: time="2025-03-25T02:51:13.880120832Z" level=warning msg="container event discarded" container=d25d085429eb398a39e3d3c9db10c18cb0ea87cb4e04210e321a8ba8786c8064 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:14.183105 containerd[1520]: time="2025-03-25T02:51:14.182918927Z" level=warning msg="container event discarded" container=e36b477fb47d837183caeeba9553ca1d9278e95b0d706b0fb3bf1b8b4fd14fb4 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:14.222295 containerd[1520]: time="2025-03-25T02:51:14.222203713Z" level=warning msg="container event discarded" container=01312a9b006d97e5056525a49ab71d90d7293353655a4b18838c8efe6617639f type=CONTAINER_STARTED_EVENT
Mar 25 02:51:14.262963 containerd[1520]: time="2025-03-25T02:51:14.262814187Z" level=warning msg="container event discarded" container=d25d085429eb398a39e3d3c9db10c18cb0ea87cb4e04210e321a8ba8786c8064 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:14.374668 sshd[5586]: Connection closed by 139.178.68.195 port 51990
Mar 25 02:51:14.373770 sshd-session[5584]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:14.379318 systemd-logind[1503]: Session 43 logged out. Waiting for processes to exit.
Mar 25 02:51:14.380388 systemd[1]: sshd@46-10.230.58.198:22-139.178.68.195:51990.service: Deactivated successfully.
Mar 25 02:51:14.384568 systemd[1]: session-43.scope: Deactivated successfully.
Mar 25 02:51:14.387318 systemd-logind[1503]: Removed session 43.
Mar 25 02:51:17.970571 kubelet[2798]: E0325 02:51:17.970506 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:19.530340 systemd[1]: Started sshd@47-10.230.58.198:22-139.178.68.195:50302.service - OpenSSH per-connection server daemon (139.178.68.195:50302).
Mar 25 02:51:20.446709 sshd[5599]: Accepted publickey for core from 139.178.68.195 port 50302 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:20.448458 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:20.456270 systemd-logind[1503]: New session 44 of user core.
Mar 25 02:51:20.464156 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 25 02:51:21.158084 sshd[5601]: Connection closed by 139.178.68.195 port 50302
Mar 25 02:51:21.159450 sshd-session[5599]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:21.164619 systemd[1]: sshd@47-10.230.58.198:22-139.178.68.195:50302.service: Deactivated successfully.
Mar 25 02:51:21.169184 systemd[1]: session-44.scope: Deactivated successfully.
Mar 25 02:51:21.171484 systemd-logind[1503]: Session 44 logged out. Waiting for processes to exit.
Mar 25 02:51:21.173032 systemd-logind[1503]: Removed session 44.
Mar 25 02:51:22.455935 containerd[1520]: time="2025-03-25T02:51:22.455701482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"b4bfd2191341ff4891c00083d975be10d90c3b5a745a1294efae4097508ab32b\" pid:5628 exited_at:{seconds:1742871082 nanos:454622773}"
Mar 25 02:51:22.971050 kubelet[2798]: E0325 02:51:22.970944 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:25.047730 containerd[1520]: time="2025-03-25T02:51:25.047361733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"8dd23637f4c23a2bb558faa61df8b134039c89ffe5c8316a283902ec5dcc3f90\" pid:5649 exited_at:{seconds:1742871085 nanos:45958597}"
Mar 25 02:51:26.316956 systemd[1]: Started sshd@48-10.230.58.198:22-139.178.68.195:40392.service - OpenSSH per-connection server daemon (139.178.68.195:40392).
Mar 25 02:51:26.819114 containerd[1520]: time="2025-03-25T02:51:26.818903260Z" level=warning msg="container event discarded" container=82af7f5c2ca59cdef53d3019b97004580520f4450e4c5b29f43f5b541dc256b9 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:26.819114 containerd[1520]: time="2025-03-25T02:51:26.819058322Z" level=warning msg="container event discarded" container=82af7f5c2ca59cdef53d3019b97004580520f4450e4c5b29f43f5b541dc256b9 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:26.898471 containerd[1520]: time="2025-03-25T02:51:26.898311388Z" level=warning msg="container event discarded" container=077ce73f9ccfb7ebb1eb90dde767c5f8967dd15b3033ff27dd5217883c70ad9f type=CONTAINER_CREATED_EVENT
Mar 25 02:51:27.081081 containerd[1520]: time="2025-03-25T02:51:27.080788240Z" level=warning msg="container event discarded" container=077ce73f9ccfb7ebb1eb90dde767c5f8967dd15b3033ff27dd5217883c70ad9f type=CONTAINER_STARTED_EVENT
Mar 25 02:51:27.125307 containerd[1520]: time="2025-03-25T02:51:27.125201332Z" level=warning msg="container event discarded" container=589aee5f6abcc85abe0cf4f40100bf3afd0e65c1e94c5de0a59d1969f6cc6583 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:27.125307 containerd[1520]: time="2025-03-25T02:51:27.125287443Z" level=warning msg="container event discarded" container=589aee5f6abcc85abe0cf4f40100bf3afd0e65c1e94c5de0a59d1969f6cc6583 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:27.240407 sshd[5661]: Accepted publickey for core from 139.178.68.195 port 40392 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:27.242918 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:27.251903 systemd-logind[1503]: New session 45 of user core.
Mar 25 02:51:27.258140 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 25 02:51:27.971904 sshd[5663]: Connection closed by 139.178.68.195 port 40392
Mar 25 02:51:27.972744 kubelet[2798]: E0325 02:51:27.972294 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:27.972832 sshd-session[5661]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:27.980056 systemd[1]: sshd@48-10.230.58.198:22-139.178.68.195:40392.service: Deactivated successfully.
Mar 25 02:51:27.983839 systemd[1]: session-45.scope: Deactivated successfully.
Mar 25 02:51:27.985244 systemd-logind[1503]: Session 45 logged out. Waiting for processes to exit.
Mar 25 02:51:27.987292 systemd-logind[1503]: Removed session 45.
Mar 25 02:51:28.465013 systemd[1]: Started sshd@49-10.230.58.198:22-81.192.87.130:36709.service - OpenSSH per-connection server daemon (81.192.87.130:36709).
Mar 25 02:51:28.807286 sshd[5678]: Invalid user caishiyu from 81.192.87.130 port 36709
Mar 25 02:51:28.861032 sshd[5678]: Received disconnect from 81.192.87.130 port 36709:11: Bye Bye [preauth]
Mar 25 02:51:28.861032 sshd[5678]: Disconnected from invalid user caishiyu 81.192.87.130 port 36709 [preauth]
Mar 25 02:51:28.864361 systemd[1]: sshd@49-10.230.58.198:22-81.192.87.130:36709.service: Deactivated successfully.
Mar 25 02:51:32.972544 kubelet[2798]: E0325 02:51:32.972470 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:33.129509 systemd[1]: Started sshd@50-10.230.58.198:22-139.178.68.195:40396.service - OpenSSH per-connection server daemon (139.178.68.195:40396).
Mar 25 02:51:34.038774 sshd[5683]: Accepted publickey for core from 139.178.68.195 port 40396 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:34.041116 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:34.048491 systemd-logind[1503]: New session 46 of user core.
Mar 25 02:51:34.051256 containerd[1520]: time="2025-03-25T02:51:34.051118004Z" level=warning msg="container event discarded" container=f453b2c5bcc3f9b1c6ba81749dbd3d51f18527d55a707d1bfa590a4256829c64 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:34.054125 systemd[1]: Started session-46.scope - Session 46 of User core.
Mar 25 02:51:34.155058 containerd[1520]: time="2025-03-25T02:51:34.154944495Z" level=warning msg="container event discarded" container=f453b2c5bcc3f9b1c6ba81749dbd3d51f18527d55a707d1bfa590a4256829c64 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:34.743780 sshd[5685]: Connection closed by 139.178.68.195 port 40396
Mar 25 02:51:34.744965 sshd-session[5683]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:34.749604 systemd[1]: sshd@50-10.230.58.198:22-139.178.68.195:40396.service: Deactivated successfully.
Mar 25 02:51:34.752473 systemd[1]: session-46.scope: Deactivated successfully.
Mar 25 02:51:34.755394 systemd-logind[1503]: Session 46 logged out. Waiting for processes to exit.
Mar 25 02:51:34.757683 systemd-logind[1503]: Removed session 46.
Mar 25 02:51:37.973317 kubelet[2798]: E0325 02:51:37.973209 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:39.060244 containerd[1520]: time="2025-03-25T02:51:39.060120910Z" level=warning msg="container event discarded" container=4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:39.060244 containerd[1520]: time="2025-03-25T02:51:39.060215026Z" level=warning msg="container event discarded" container=4de55a2ef9c13aa37edd653fe64c7e3febdfa1703842c9fe6fa87ceb72667821 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:39.595270 containerd[1520]: time="2025-03-25T02:51:39.595143970Z" level=warning msg="container event discarded" container=351477adb6d2f160d85d4798d2367e679b99a31bd788a211f713390d6b4716b2 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:39.595270 containerd[1520]: time="2025-03-25T02:51:39.595220360Z" level=warning msg="container event discarded" container=351477adb6d2f160d85d4798d2367e679b99a31bd788a211f713390d6b4716b2 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:39.901941 systemd[1]: Started sshd@51-10.230.58.198:22-139.178.68.195:53026.service - OpenSSH per-connection server daemon (139.178.68.195:53026).
Mar 25 02:51:40.801978 sshd[5698]: Accepted publickey for core from 139.178.68.195 port 53026 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:40.804393 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:40.813236 systemd-logind[1503]: New session 47 of user core.
Mar 25 02:51:40.821318 systemd[1]: Started session-47.scope - Session 47 of User core.
Mar 25 02:51:41.208441 containerd[1520]: time="2025-03-25T02:51:41.208312582Z" level=warning msg="container event discarded" container=3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:41.353470 containerd[1520]: time="2025-03-25T02:51:41.353354903Z" level=warning msg="container event discarded" container=3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:41.504161 containerd[1520]: time="2025-03-25T02:51:41.503938258Z" level=warning msg="container event discarded" container=3d32a10ee889f4f13b573bede78de047253006dd07efb31f47776f0d95fb5144 type=CONTAINER_STOPPED_EVENT
Mar 25 02:51:41.535030 sshd[5700]: Connection closed by 139.178.68.195 port 53026
Mar 25 02:51:41.534018 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:41.540480 systemd[1]: sshd@51-10.230.58.198:22-139.178.68.195:53026.service: Deactivated successfully.
Mar 25 02:51:41.544288 systemd[1]: session-47.scope: Deactivated successfully.
Mar 25 02:51:41.545708 systemd-logind[1503]: Session 47 logged out. Waiting for processes to exit.
Mar 25 02:51:41.547474 systemd-logind[1503]: Removed session 47.
Mar 25 02:51:41.690367 systemd[1]: Started sshd@52-10.230.58.198:22-139.178.68.195:53034.service - OpenSSH per-connection server daemon (139.178.68.195:53034).
Mar 25 02:51:42.608697 sshd[5713]: Accepted publickey for core from 139.178.68.195 port 53034 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:42.610974 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:42.619092 systemd-logind[1503]: New session 48 of user core.
Mar 25 02:51:42.624160 systemd[1]: Started session-48.scope - Session 48 of User core.
Mar 25 02:51:42.974232 kubelet[2798]: E0325 02:51:42.974142 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:43.402642 sshd[5715]: Connection closed by 139.178.68.195 port 53034
Mar 25 02:51:43.401664 sshd-session[5713]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:43.410272 systemd[1]: sshd@52-10.230.58.198:22-139.178.68.195:53034.service: Deactivated successfully.
Mar 25 02:51:43.413636 systemd[1]: session-48.scope: Deactivated successfully.
Mar 25 02:51:43.415789 systemd-logind[1503]: Session 48 logged out. Waiting for processes to exit.
Mar 25 02:51:43.417505 systemd-logind[1503]: Removed session 48.
Mar 25 02:51:43.558185 systemd[1]: Started sshd@53-10.230.58.198:22-139.178.68.195:53040.service - OpenSSH per-connection server daemon (139.178.68.195:53040).
Mar 25 02:51:44.322976 containerd[1520]: time="2025-03-25T02:51:44.322858558Z" level=warning msg="container event discarded" container=b3ae034377164309039b30bc141fa687f7fc0997b69a4aabfd4bc75cb4a3d49b type=CONTAINER_CREATED_EVENT
Mar 25 02:51:44.468452 containerd[1520]: time="2025-03-25T02:51:44.468365568Z" level=warning msg="container event discarded" container=b3ae034377164309039b30bc141fa687f7fc0997b69a4aabfd4bc75cb4a3d49b type=CONTAINER_STARTED_EVENT
Mar 25 02:51:44.469262 sshd[5725]: Accepted publickey for core from 139.178.68.195 port 53040 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:44.471099 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:44.479456 systemd-logind[1503]: New session 49 of user core.
Mar 25 02:51:44.486094 systemd[1]: Started session-49.scope - Session 49 of User core.
Mar 25 02:51:45.196906 sshd[5727]: Connection closed by 139.178.68.195 port 53040
Mar 25 02:51:45.196129 sshd-session[5725]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:45.201341 systemd-logind[1503]: Session 49 logged out. Waiting for processes to exit.
Mar 25 02:51:45.202032 systemd[1]: sshd@53-10.230.58.198:22-139.178.68.195:53040.service: Deactivated successfully.
Mar 25 02:51:45.205898 systemd[1]: session-49.scope: Deactivated successfully.
Mar 25 02:51:45.209283 systemd-logind[1503]: Removed session 49.
Mar 25 02:51:45.348121 containerd[1520]: time="2025-03-25T02:51:45.347778985Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"9aedeeb719427b7007abf2207605189bb56920475f914124ec01f68e833eeb92\" pid:5750 exited_at:{seconds:1742871105 nanos:347002836}"
Mar 25 02:51:47.974700 kubelet[2798]: E0325 02:51:47.974621 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:50.354802 systemd[1]: Started sshd@54-10.230.58.198:22-139.178.68.195:34660.service - OpenSSH per-connection server daemon (139.178.68.195:34660).
Mar 25 02:51:50.701986 containerd[1520]: time="2025-03-25T02:51:50.701897887Z" level=warning msg="container event discarded" container=87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412 type=CONTAINER_CREATED_EVENT
Mar 25 02:51:50.858795 containerd[1520]: time="2025-03-25T02:51:50.858703776Z" level=warning msg="container event discarded" container=87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412 type=CONTAINER_STARTED_EVENT
Mar 25 02:51:51.291669 sshd[5764]: Accepted publickey for core from 139.178.68.195 port 34660 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:51.294254 sshd-session[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:51.303461 systemd-logind[1503]: New session 50 of user core.
Mar 25 02:51:51.308096 systemd[1]: Started session-50.scope - Session 50 of User core.
Mar 25 02:51:51.986341 containerd[1520]: time="2025-03-25T02:51:51.986187412Z" level=warning msg="container event discarded" container=87017d018aa9229e335d82ef3820e6b2bba85c5a5f2a4173d415658e7d9ed412 type=CONTAINER_STOPPED_EVENT
Mar 25 02:51:52.006650 sshd[5766]: Connection closed by 139.178.68.195 port 34660
Mar 25 02:51:52.007227 sshd-session[5764]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:52.013410 systemd[1]: sshd@54-10.230.58.198:22-139.178.68.195:34660.service: Deactivated successfully.
Mar 25 02:51:52.017347 systemd[1]: session-50.scope: Deactivated successfully.
Mar 25 02:51:52.018748 systemd-logind[1503]: Session 50 logged out. Waiting for processes to exit.
Mar 25 02:51:52.020668 systemd-logind[1503]: Removed session 50.
Mar 25 02:51:52.458109 containerd[1520]: time="2025-03-25T02:51:52.458046946Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"fc7631590a8467b4d0c85847a2824c6cc7e2b394d16cdd7dff64fee7aa271589\" pid:5789 exited_at:{seconds:1742871112 nanos:457111601}"
Mar 25 02:51:52.975518 kubelet[2798]: E0325 02:51:52.975444 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:55.052028 containerd[1520]: time="2025-03-25T02:51:55.051951405Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"551e98130713b2f871af345fadf90d4257af1bcb4d953edc096928418daf6675\" pid:5810 exited_at:{seconds:1742871115 nanos:51058443}"
Mar 25 02:51:57.163365 systemd[1]: Started sshd@55-10.230.58.198:22-139.178.68.195:36858.service - OpenSSH per-connection server daemon (139.178.68.195:36858).
Mar 25 02:51:57.975710 kubelet[2798]: E0325 02:51:57.975617 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:51:58.086498 sshd[5823]: Accepted publickey for core from 139.178.68.195 port 36858 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:51:58.088966 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:51:58.097598 systemd-logind[1503]: New session 51 of user core.
Mar 25 02:51:58.105140 systemd[1]: Started session-51.scope - Session 51 of User core.
Mar 25 02:51:58.807694 sshd[5837]: Connection closed by 139.178.68.195 port 36858
Mar 25 02:51:58.808243 sshd-session[5823]: pam_unix(sshd:session): session closed for user core
Mar 25 02:51:58.815373 systemd-logind[1503]: Session 51 logged out. Waiting for processes to exit.
Mar 25 02:51:58.815779 systemd[1]: sshd@55-10.230.58.198:22-139.178.68.195:36858.service: Deactivated successfully.
Mar 25 02:51:58.818836 systemd[1]: session-51.scope: Deactivated successfully.
Mar 25 02:51:58.821728 systemd-logind[1503]: Removed session 51.
Mar 25 02:52:02.781857 containerd[1520]: time="2025-03-25T02:52:02.781695316Z" level=warning msg="container event discarded" container=08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47 type=CONTAINER_CREATED_EVENT
Mar 25 02:52:02.976517 kubelet[2798]: E0325 02:52:02.976386 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:03.136659 containerd[1520]: time="2025-03-25T02:52:03.136546066Z" level=warning msg="container event discarded" container=08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47 type=CONTAINER_STARTED_EVENT
Mar 25 02:52:03.995358 systemd[1]: Started sshd@56-10.230.58.198:22-139.178.68.195:36864.service - OpenSSH per-connection server daemon (139.178.68.195:36864).
Mar 25 02:52:04.260745 containerd[1520]: time="2025-03-25T02:52:04.260425814Z" level=warning msg="container event discarded" container=53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337 type=CONTAINER_CREATED_EVENT
Mar 25 02:52:04.260745 containerd[1520]: time="2025-03-25T02:52:04.260519335Z" level=warning msg="container event discarded" container=53c6d08b763dbb6497e7fe1cbd2a160ab8722cdadbaf9867867b5f60486fe337 type=CONTAINER_STARTED_EVENT
Mar 25 02:52:04.909451 sshd[5849]: Accepted publickey for core from 139.178.68.195 port 36864 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:52:04.911766 sshd-session[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:52:04.920990 systemd-logind[1503]: New session 52 of user core.
Mar 25 02:52:04.927106 systemd[1]: Started session-52.scope - Session 52 of User core.
Mar 25 02:52:05.631290 sshd[5851]: Connection closed by 139.178.68.195 port 36864
Mar 25 02:52:05.632379 sshd-session[5849]: pam_unix(sshd:session): session closed for user core
Mar 25 02:52:05.637738 systemd[1]: sshd@56-10.230.58.198:22-139.178.68.195:36864.service: Deactivated successfully.
Mar 25 02:52:05.641282 systemd[1]: session-52.scope: Deactivated successfully.
Mar 25 02:52:05.643763 systemd-logind[1503]: Session 52 logged out. Waiting for processes to exit.
Mar 25 02:52:05.645549 systemd-logind[1503]: Removed session 52.
Mar 25 02:52:05.674741 containerd[1520]: time="2025-03-25T02:52:05.674647766Z" level=warning msg="container event discarded" container=05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251 type=CONTAINER_CREATED_EVENT
Mar 25 02:52:05.674741 containerd[1520]: time="2025-03-25T02:52:05.674728938Z" level=warning msg="container event discarded" container=05b084d3d498cbc2ad0d073022cfafc59ce37c5c9e41fea4d76bca00588c2251 type=CONTAINER_STARTED_EVENT
Mar 25 02:52:05.774741 containerd[1520]: time="2025-03-25T02:52:05.774596476Z" level=warning msg="container event discarded" container=f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99 type=CONTAINER_CREATED_EVENT
Mar 25 02:52:05.774741 containerd[1520]: time="2025-03-25T02:52:05.774681243Z" level=warning msg="container event discarded" container=f09976e4d08893c9c2de1c6090d39ac943ebf2f41494d294d2b7e1020fabac99 type=CONTAINER_STARTED_EVENT
Mar 25 02:52:05.817116 containerd[1520]: time="2025-03-25T02:52:05.817015815Z" level=warning msg="container event discarded" container=c68c38564f0b69deaf2f165f979fcf16132a2bfface4b102852692a1d799660e type=CONTAINER_CREATED_EVENT
Mar 25 02:52:06.112629 containerd[1520]: time="2025-03-25T02:52:06.112387395Z" level=warning msg="container event discarded" container=c68c38564f0b69deaf2f165f979fcf16132a2bfface4b102852692a1d799660e type=CONTAINER_STARTED_EVENT
Mar 25 02:52:07.976634 kubelet[2798]: E0325 02:52:07.976547 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:08.441194 containerd[1520]: time="2025-03-25T02:52:08.441088781Z" level=warning msg="container event discarded" container=c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43 type=CONTAINER_CREATED_EVENT
Mar 25 02:52:08.667916 containerd[1520]: time="2025-03-25T02:52:08.667712342Z" level=warning msg="container event discarded" container=c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43 type=CONTAINER_STARTED_EVENT
Mar 25 02:52:10.668838 containerd[1520]: time="2025-03-25T02:52:10.668736873Z" level=warning msg="container event discarded" container=d6ab20bb34de55c4fea4e0fed3786d9d464b6be3c86d30486a76efdae2384609 type=CONTAINER_CREATED_EVENT
Mar 25 02:52:10.787965 systemd[1]: Started sshd@57-10.230.58.198:22-139.178.68.195:51784.service - OpenSSH per-connection server daemon (139.178.68.195:51784).
Mar 25 02:52:10.809202 containerd[1520]: time="2025-03-25T02:52:10.809112169Z" level=warning msg="container event discarded" container=d6ab20bb34de55c4fea4e0fed3786d9d464b6be3c86d30486a76efdae2384609 type=CONTAINER_STARTED_EVENT
Mar 25 02:52:11.701582 sshd[5864]: Accepted publickey for core from 139.178.68.195 port 51784 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:52:11.704261 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:52:11.712793 systemd-logind[1503]: New session 53 of user core.
Mar 25 02:52:11.723149 systemd[1]: Started session-53.scope - Session 53 of User core.
Mar 25 02:52:12.448983 sshd[5866]: Connection closed by 139.178.68.195 port 51784
Mar 25 02:52:12.450099 sshd-session[5864]: pam_unix(sshd:session): session closed for user core
Mar 25 02:52:12.455512 systemd-logind[1503]: Session 53 logged out. Waiting for processes to exit.
Mar 25 02:52:12.456806 systemd[1]: sshd@57-10.230.58.198:22-139.178.68.195:51784.service: Deactivated successfully.
Mar 25 02:52:12.460261 systemd[1]: session-53.scope: Deactivated successfully.
Mar 25 02:52:12.462506 systemd-logind[1503]: Removed session 53.
Mar 25 02:52:12.845079 containerd[1520]: time="2025-03-25T02:52:12.844802307Z" level=warning msg="container event discarded" container=b4dab0cc7f284f8956a0fcbc397131a2166aebd1e5f2b04c37f671c10d3f8a9d type=CONTAINER_CREATED_EVENT
Mar 25 02:52:12.972087 containerd[1520]: time="2025-03-25T02:52:12.972012823Z" level=warning msg="container event discarded" container=b4dab0cc7f284f8956a0fcbc397131a2166aebd1e5f2b04c37f671c10d3f8a9d type=CONTAINER_STARTED_EVENT
Mar 25 02:52:12.976883 kubelet[2798]: E0325 02:52:12.976822 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:17.609858 systemd[1]: Started sshd@58-10.230.58.198:22-139.178.68.195:39958.service - OpenSSH per-connection server daemon (139.178.68.195:39958).
Mar 25 02:52:17.977565 kubelet[2798]: E0325 02:52:17.977502 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:18.521090 sshd[5878]: Accepted publickey for core from 139.178.68.195 port 39958 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:52:18.523379 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:52:18.532990 systemd-logind[1503]: New session 54 of user core.
Mar 25 02:52:18.540112 systemd[1]: Started session-54.scope - Session 54 of User core.
Mar 25 02:52:19.227372 sshd[5880]: Connection closed by 139.178.68.195 port 39958
Mar 25 02:52:19.228464 sshd-session[5878]: pam_unix(sshd:session): session closed for user core
Mar 25 02:52:19.234230 systemd[1]: sshd@58-10.230.58.198:22-139.178.68.195:39958.service: Deactivated successfully.
Mar 25 02:52:19.237023 systemd[1]: session-54.scope: Deactivated successfully.
Mar 25 02:52:19.239746 systemd-logind[1503]: Session 54 logged out. Waiting for processes to exit.
Mar 25 02:52:19.241734 systemd-logind[1503]: Removed session 54.
Mar 25 02:52:22.453204 containerd[1520]: time="2025-03-25T02:52:22.453048438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"a830a5ea45badffa6868eea1e62a880b777aa5db9b46a28f65c89601506faf06\" pid:5905 exited_at:{seconds:1742871142 nanos:452433604}"
Mar 25 02:52:22.977700 kubelet[2798]: E0325 02:52:22.977640 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:24.390386 systemd[1]: Started sshd@59-10.230.58.198:22-139.178.68.195:39966.service - OpenSSH per-connection server daemon (139.178.68.195:39966).
Mar 25 02:52:25.043445 containerd[1520]: time="2025-03-25T02:52:25.043374285Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"cf72d22fa6f0420c52f5defaf0cf2c630232e8f33667c71d1e18a5031792e6f8\" pid:5929 exited_at:{seconds:1742871145 nanos:42804321}"
Mar 25 02:52:25.297372 sshd[5915]: Accepted publickey for core from 139.178.68.195 port 39966 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:52:25.299517 sshd-session[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:52:25.308929 systemd-logind[1503]: New session 55 of user core.
Mar 25 02:52:25.315091 systemd[1]: Started session-55.scope - Session 55 of User core.
Mar 25 02:52:26.008226 sshd[5939]: Connection closed by 139.178.68.195 port 39966
Mar 25 02:52:26.009319 sshd-session[5915]: pam_unix(sshd:session): session closed for user core
Mar 25 02:52:26.014843 systemd[1]: sshd@59-10.230.58.198:22-139.178.68.195:39966.service: Deactivated successfully.
Mar 25 02:52:26.017829 systemd[1]: session-55.scope: Deactivated successfully.
Mar 25 02:52:26.019227 systemd-logind[1503]: Session 55 logged out. Waiting for processes to exit.
Mar 25 02:52:26.021063 systemd-logind[1503]: Removed session 55.
Mar 25 02:52:27.978434 kubelet[2798]: E0325 02:52:27.978302 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:28.614727 systemd[1]: Started sshd@60-10.230.58.198:22-81.192.87.130:48318.service - OpenSSH per-connection server daemon (81.192.87.130:48318).
Mar 25 02:52:28.957623 sshd[5953]: Invalid user alara from 81.192.87.130 port 48318
Mar 25 02:52:29.013002 sshd[5953]: Received disconnect from 81.192.87.130 port 48318:11: Bye Bye [preauth]
Mar 25 02:52:29.013002 sshd[5953]: Disconnected from invalid user alara 81.192.87.130 port 48318 [preauth]
Mar 25 02:52:29.016178 systemd[1]: sshd@60-10.230.58.198:22-81.192.87.130:48318.service: Deactivated successfully.
Mar 25 02:52:31.165560 systemd[1]: Started sshd@61-10.230.58.198:22-139.178.68.195:43186.service - OpenSSH per-connection server daemon (139.178.68.195:43186).
Mar 25 02:52:32.081139 sshd[5967]: Accepted publickey for core from 139.178.68.195 port 43186 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:52:32.083366 sshd-session[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:52:32.092611 systemd-logind[1503]: New session 56 of user core.
Mar 25 02:52:32.098128 systemd[1]: Started session-56.scope - Session 56 of User core.
Mar 25 02:52:32.810503 sshd[5969]: Connection closed by 139.178.68.195 port 43186
Mar 25 02:52:32.810321 sshd-session[5967]: pam_unix(sshd:session): session closed for user core
Mar 25 02:52:32.814958 systemd[1]: sshd@61-10.230.58.198:22-139.178.68.195:43186.service: Deactivated successfully.
Mar 25 02:52:32.818504 systemd[1]: session-56.scope: Deactivated successfully.
Mar 25 02:52:32.820683 systemd-logind[1503]: Session 56 logged out. Waiting for processes to exit.
Mar 25 02:52:32.822364 systemd-logind[1503]: Removed session 56.
Mar 25 02:52:32.978750 kubelet[2798]: E0325 02:52:32.978677 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:37.966937 systemd[1]: Started sshd@62-10.230.58.198:22-139.178.68.195:37258.service - OpenSSH per-connection server daemon (139.178.68.195:37258).
Mar 25 02:52:37.979123 kubelet[2798]: E0325 02:52:37.978866 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:38.871728 sshd[5980]: Accepted publickey for core from 139.178.68.195 port 37258 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:52:38.874060 sshd-session[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:52:38.881841 systemd-logind[1503]: New session 57 of user core.
Mar 25 02:52:38.888178 systemd[1]: Started session-57.scope - Session 57 of User core.
Mar 25 02:52:39.581961 sshd[5982]: Connection closed by 139.178.68.195 port 37258
Mar 25 02:52:39.583179 sshd-session[5980]: pam_unix(sshd:session): session closed for user core
Mar 25 02:52:39.589481 systemd-logind[1503]: Session 57 logged out. Waiting for processes to exit.
Mar 25 02:52:39.590840 systemd[1]: sshd@62-10.230.58.198:22-139.178.68.195:37258.service: Deactivated successfully.
Mar 25 02:52:39.593859 systemd[1]: session-57.scope: Deactivated successfully.
Mar 25 02:52:39.595469 systemd-logind[1503]: Removed session 57.
Mar 25 02:52:42.979308 kubelet[2798]: E0325 02:52:42.979157 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:44.743423 systemd[1]: Started sshd@63-10.230.58.198:22-139.178.68.195:37268.service - OpenSSH per-connection server daemon (139.178.68.195:37268).
Mar 25 02:52:45.350312 containerd[1520]: time="2025-03-25T02:52:45.350149839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"d0aca0b9c0e9438e0629ba2a80243ead9f5ecfdba59fe510b2d70567df6b3959\" pid:6009 exited_at:{seconds:1742871165 nanos:348961243}"
Mar 25 02:52:45.662516 sshd[5995]: Accepted publickey for core from 139.178.68.195 port 37268 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:52:45.664788 sshd-session[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:52:45.673909 systemd-logind[1503]: New session 58 of user core.
Mar 25 02:52:45.679124 systemd[1]: Started session-58.scope - Session 58 of User core.
Mar 25 02:52:46.426531 sshd[6020]: Connection closed by 139.178.68.195 port 37268
Mar 25 02:52:46.429495 sshd-session[5995]: pam_unix(sshd:session): session closed for user core
Mar 25 02:52:46.440732 systemd-logind[1503]: Session 58 logged out. Waiting for processes to exit.
Mar 25 02:52:46.442205 systemd[1]: sshd@63-10.230.58.198:22-139.178.68.195:37268.service: Deactivated successfully.
Mar 25 02:52:46.446004 systemd[1]: session-58.scope: Deactivated successfully.
Mar 25 02:52:46.447693 systemd-logind[1503]: Removed session 58.
Mar 25 02:52:47.979557 kubelet[2798]: E0325 02:52:47.979469 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:51.585736 systemd[1]: Started sshd@64-10.230.58.198:22-139.178.68.195:54188.service - OpenSSH per-connection server daemon (139.178.68.195:54188).
Mar 25 02:52:52.455922 containerd[1520]: time="2025-03-25T02:52:52.455831880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"99b3070d1c433a05085df398be10075ea755244a68ac0796831c7915a1616bdd\" pid:6046 exited_at:{seconds:1742871172 nanos:455512045}"
Mar 25 02:52:52.522208 sshd[6032]: Accepted publickey for core from 139.178.68.195 port 54188 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:52:52.524485 sshd-session[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:52:52.533669 systemd-logind[1503]: New session 59 of user core.
Mar 25 02:52:52.538141 systemd[1]: Started session-59.scope - Session 59 of User core.
Mar 25 02:52:52.980260 kubelet[2798]: E0325 02:52:52.980104 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:53.271589 sshd[6055]: Connection closed by 139.178.68.195 port 54188
Mar 25 02:52:53.272558 sshd-session[6032]: pam_unix(sshd:session): session closed for user core
Mar 25 02:52:53.282047 systemd[1]: sshd@64-10.230.58.198:22-139.178.68.195:54188.service: Deactivated successfully.
Mar 25 02:52:53.284638 systemd[1]: session-59.scope: Deactivated successfully.
Mar 25 02:52:53.285819 systemd-logind[1503]: Session 59 logged out. Waiting for processes to exit.
Mar 25 02:52:53.288320 systemd-logind[1503]: Removed session 59.
Mar 25 02:52:55.048222 containerd[1520]: time="2025-03-25T02:52:55.048091500Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"198d663718ff9c9d7265e949d5f1d284c06d93112ad819217f19a14b881da330\" pid:6078 exited_at:{seconds:1742871175 nanos:47506797}"
Mar 25 02:52:57.981275 kubelet[2798]: E0325 02:52:57.981192 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:52:58.431226 systemd[1]: Started sshd@65-10.230.58.198:22-139.178.68.195:33152.service - OpenSSH per-connection server daemon (139.178.68.195:33152).
Mar 25 02:52:59.362545 sshd[6092]: Accepted publickey for core from 139.178.68.195 port 33152 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:52:59.364849 sshd-session[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:52:59.372715 systemd-logind[1503]: New session 60 of user core.
Mar 25 02:52:59.378066 systemd[1]: Started session-60.scope - Session 60 of User core.
Mar 25 02:53:00.095603 sshd[6094]: Connection closed by 139.178.68.195 port 33152
Mar 25 02:53:00.096697 sshd-session[6092]: pam_unix(sshd:session): session closed for user core
Mar 25 02:53:00.102440 systemd-logind[1503]: Session 60 logged out. Waiting for processes to exit.
Mar 25 02:53:00.103550 systemd[1]: sshd@65-10.230.58.198:22-139.178.68.195:33152.service: Deactivated successfully.
Mar 25 02:53:00.106812 systemd[1]: session-60.scope: Deactivated successfully.
Mar 25 02:53:00.109065 systemd-logind[1503]: Removed session 60.
Mar 25 02:53:02.981415 kubelet[2798]: E0325 02:53:02.981342 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:05.257272 systemd[1]: Started sshd@66-10.230.58.198:22-139.178.68.195:52422.service - OpenSSH per-connection server daemon (139.178.68.195:52422).
Mar 25 02:53:06.161903 sshd[6105]: Accepted publickey for core from 139.178.68.195 port 52422 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:53:06.164732 sshd-session[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:53:06.173287 systemd-logind[1503]: New session 61 of user core.
Mar 25 02:53:06.183112 systemd[1]: Started session-61.scope - Session 61 of User core.
Mar 25 02:53:06.865903 sshd[6107]: Connection closed by 139.178.68.195 port 52422
Mar 25 02:53:06.867095 sshd-session[6105]: pam_unix(sshd:session): session closed for user core
Mar 25 02:53:06.872146 systemd[1]: sshd@66-10.230.58.198:22-139.178.68.195:52422.service: Deactivated successfully.
Mar 25 02:53:06.872727 systemd-logind[1503]: Session 61 logged out. Waiting for processes to exit.
Mar 25 02:53:06.875689 systemd[1]: session-61.scope: Deactivated successfully.
Mar 25 02:53:06.878924 systemd-logind[1503]: Removed session 61.
Mar 25 02:53:07.981998 kubelet[2798]: E0325 02:53:07.981910 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:12.027322 systemd[1]: Started sshd@67-10.230.58.198:22-139.178.68.195:52432.service - OpenSSH per-connection server daemon (139.178.68.195:52432).
Mar 25 02:53:12.933128 sshd[6119]: Accepted publickey for core from 139.178.68.195 port 52432 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:53:12.935799 sshd-session[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:53:12.944717 systemd-logind[1503]: New session 62 of user core.
Mar 25 02:53:12.957117 systemd[1]: Started session-62.scope - Session 62 of User core.
Mar 25 02:53:12.982757 kubelet[2798]: E0325 02:53:12.982698 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:13.648388 sshd[6121]: Connection closed by 139.178.68.195 port 52432
Mar 25 02:53:13.649282 sshd-session[6119]: pam_unix(sshd:session): session closed for user core
Mar 25 02:53:13.655652 systemd-logind[1503]: Session 62 logged out. Waiting for processes to exit.
Mar 25 02:53:13.656600 systemd[1]: sshd@67-10.230.58.198:22-139.178.68.195:52432.service: Deactivated successfully.
Mar 25 02:53:13.660033 systemd[1]: session-62.scope: Deactivated successfully.
Mar 25 02:53:13.661701 systemd-logind[1503]: Removed session 62.
Mar 25 02:53:15.875564 kubelet[2798]: E0325 02:53:15.875359 2798 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:53:15.875564 kubelet[2798]: E0325 02:53:15.875485 2798 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:53:17.983197 kubelet[2798]: E0325 02:53:17.983097 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:18.805209 systemd[1]: Started sshd@68-10.230.58.198:22-139.178.68.195:53552.service - OpenSSH per-connection server daemon (139.178.68.195:53552).
Mar 25 02:53:19.717689 sshd[6132]: Accepted publickey for core from 139.178.68.195 port 53552 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:53:19.720015 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:53:19.727459 systemd-logind[1503]: New session 63 of user core.
Mar 25 02:53:19.736101 systemd[1]: Started session-63.scope - Session 63 of User core.
Mar 25 02:53:20.423184 sshd[6134]: Connection closed by 139.178.68.195 port 53552
Mar 25 02:53:20.424264 sshd-session[6132]: pam_unix(sshd:session): session closed for user core
Mar 25 02:53:20.429682 systemd-logind[1503]: Session 63 logged out. Waiting for processes to exit.
Mar 25 02:53:20.430530 systemd[1]: sshd@68-10.230.58.198:22-139.178.68.195:53552.service: Deactivated successfully.
Mar 25 02:53:20.433409 systemd[1]: session-63.scope: Deactivated successfully.
Mar 25 02:53:20.436583 systemd-logind[1503]: Removed session 63.
Mar 25 02:53:22.461551 containerd[1520]: time="2025-03-25T02:53:22.461420574Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"6d20a4b16f72b7b1b434197674c85ff2ab81424c9262299c5dc18282302d6b7b\" pid:6159 exited_at:{seconds:1742871202 nanos:460816725}"
Mar 25 02:53:22.983333 kubelet[2798]: E0325 02:53:22.983261 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:25.043604 containerd[1520]: time="2025-03-25T02:53:25.043504858Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"12018ac258c7059aff38dc7935e2bc7c7085f85d492ae1ed5d93b565df0ba35a\" pid:6186 exited_at:{seconds:1742871205 nanos:42984879}"
Mar 25 02:53:25.581013 systemd[1]: Started sshd@69-10.230.58.198:22-139.178.68.195:47196.service - OpenSSH per-connection server daemon (139.178.68.195:47196).
Mar 25 02:53:26.515673 sshd[6199]: Accepted publickey for core from 139.178.68.195 port 47196 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:53:26.518137 sshd-session[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:53:26.531627 systemd-logind[1503]: New session 64 of user core.
Mar 25 02:53:26.538144 systemd[1]: Started session-64.scope - Session 64 of User core.
Mar 25 02:53:27.393575 sshd[6201]: Connection closed by 139.178.68.195 port 47196
Mar 25 02:53:27.394793 sshd-session[6199]: pam_unix(sshd:session): session closed for user core
Mar 25 02:53:27.405109 systemd[1]: sshd@69-10.230.58.198:22-139.178.68.195:47196.service: Deactivated successfully.
Mar 25 02:53:27.410065 systemd[1]: session-64.scope: Deactivated successfully.
Mar 25 02:53:27.411703 systemd-logind[1503]: Session 64 logged out. Waiting for processes to exit.
Mar 25 02:53:27.413636 systemd-logind[1503]: Removed session 64.
Mar 25 02:53:27.984495 kubelet[2798]: E0325 02:53:27.984403 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:30.906636 systemd[1]: Started sshd@70-10.230.58.198:22-81.192.87.130:59900.service - OpenSSH per-connection server daemon (81.192.87.130:59900).
Mar 25 02:53:31.274696 sshd[6215]: Invalid user james from 81.192.87.130 port 59900
Mar 25 02:53:31.330109 sshd[6215]: Received disconnect from 81.192.87.130 port 59900:11: Bye Bye [preauth]
Mar 25 02:53:31.330109 sshd[6215]: Disconnected from invalid user james 81.192.87.130 port 59900 [preauth]
Mar 25 02:53:31.333271 systemd[1]: sshd@70-10.230.58.198:22-81.192.87.130:59900.service: Deactivated successfully.
Mar 25 02:53:32.555984 systemd[1]: Started sshd@71-10.230.58.198:22-139.178.68.195:47206.service - OpenSSH per-connection server daemon (139.178.68.195:47206).
Mar 25 02:53:32.985479 kubelet[2798]: E0325 02:53:32.985398 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:33.473628 sshd[6220]: Accepted publickey for core from 139.178.68.195 port 47206 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:53:33.476217 sshd-session[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:53:33.485228 systemd-logind[1503]: New session 65 of user core.
Mar 25 02:53:33.492123 systemd[1]: Started session-65.scope - Session 65 of User core.
Mar 25 02:53:34.207106 sshd[6222]: Connection closed by 139.178.68.195 port 47206
Mar 25 02:53:34.208547 sshd-session[6220]: pam_unix(sshd:session): session closed for user core
Mar 25 02:53:34.214667 systemd[1]: sshd@71-10.230.58.198:22-139.178.68.195:47206.service: Deactivated successfully.
Mar 25 02:53:34.217789 systemd[1]: session-65.scope: Deactivated successfully.
Mar 25 02:53:34.219214 systemd-logind[1503]: Session 65 logged out. Waiting for processes to exit.
Mar 25 02:53:34.220849 systemd-logind[1503]: Removed session 65.
Mar 25 02:53:37.987256 kubelet[2798]: E0325 02:53:37.987115 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:39.363413 systemd[1]: Started sshd@72-10.230.58.198:22-139.178.68.195:43722.service - OpenSSH per-connection server daemon (139.178.68.195:43722).
Mar 25 02:53:40.309611 sshd[6243]: Accepted publickey for core from 139.178.68.195 port 43722 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:53:40.312250 sshd-session[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:53:40.320100 systemd-logind[1503]: New session 66 of user core.
Mar 25 02:53:40.326118 systemd[1]: Started session-66.scope - Session 66 of User core.
Mar 25 02:53:41.150860 sshd[6245]: Connection closed by 139.178.68.195 port 43722
Mar 25 02:53:41.152045 sshd-session[6243]: pam_unix(sshd:session): session closed for user core
Mar 25 02:53:41.157485 systemd[1]: sshd@72-10.230.58.198:22-139.178.68.195:43722.service: Deactivated successfully.
Mar 25 02:53:41.160799 systemd[1]: session-66.scope: Deactivated successfully.
Mar 25 02:53:41.163953 systemd-logind[1503]: Session 66 logged out. Waiting for processes to exit.
Mar 25 02:53:41.165722 systemd-logind[1503]: Removed session 66.
Mar 25 02:53:42.987934 kubelet[2798]: E0325 02:53:42.987261 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:45.351409 containerd[1520]: time="2025-03-25T02:53:45.349865633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"17727b3df37cc338b39b2fda119afbbdba012d82cca40ffc460e20ae6a4cee6e\" pid:6269 exited_at:{seconds:1742871225 nanos:349559501}"
Mar 25 02:53:46.308577 systemd[1]: Started sshd@73-10.230.58.198:22-139.178.68.195:54436.service - OpenSSH per-connection server daemon (139.178.68.195:54436).
Mar 25 02:53:47.214952 sshd[6279]: Accepted publickey for core from 139.178.68.195 port 54436 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:53:47.217203 sshd-session[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:53:47.228116 systemd-logind[1503]: New session 67 of user core.
Mar 25 02:53:47.234415 systemd[1]: Started session-67.scope - Session 67 of User core.
Mar 25 02:53:47.926816 sshd[6281]: Connection closed by 139.178.68.195 port 54436
Mar 25 02:53:47.928359 sshd-session[6279]: pam_unix(sshd:session): session closed for user core
Mar 25 02:53:47.935637 systemd[1]: sshd@73-10.230.58.198:22-139.178.68.195:54436.service: Deactivated successfully.
Mar 25 02:53:47.939211 systemd[1]: session-67.scope: Deactivated successfully.
Mar 25 02:53:47.940706 systemd-logind[1503]: Session 67 logged out. Waiting for processes to exit.
Mar 25 02:53:47.943119 systemd-logind[1503]: Removed session 67.
Mar 25 02:53:47.988271 kubelet[2798]: E0325 02:53:47.988212 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:52.454076 containerd[1520]: time="2025-03-25T02:53:52.453968611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"26468a693a94f444b0282505d7c2250c5c030a03d7dbe882eccefab11782874e\" pid:6303 exited_at:{seconds:1742871232 nanos:453224033}"
Mar 25 02:53:52.988758 kubelet[2798]: E0325 02:53:52.988690 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:53.083382 systemd[1]: Started sshd@74-10.230.58.198:22-139.178.68.195:54448.service - OpenSSH per-connection server daemon (139.178.68.195:54448).
Mar 25 02:53:53.995526 sshd[6313]: Accepted publickey for core from 139.178.68.195 port 54448 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:53:53.998122 sshd-session[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:53:54.006802 systemd-logind[1503]: New session 68 of user core.
Mar 25 02:53:54.017268 systemd[1]: Started session-68.scope - Session 68 of User core.
Mar 25 02:53:54.710259 sshd[6315]: Connection closed by 139.178.68.195 port 54448
Mar 25 02:53:54.709382 sshd-session[6313]: pam_unix(sshd:session): session closed for user core
Mar 25 02:53:54.714394 systemd[1]: sshd@74-10.230.58.198:22-139.178.68.195:54448.service: Deactivated successfully.
Mar 25 02:53:54.717607 systemd[1]: session-68.scope: Deactivated successfully.
Mar 25 02:53:54.720121 systemd-logind[1503]: Session 68 logged out. Waiting for processes to exit.
Mar 25 02:53:54.722015 systemd-logind[1503]: Removed session 68.
Mar 25 02:53:55.051438 containerd[1520]: time="2025-03-25T02:53:55.051124636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"2bcb1092d05fea457ade17da98f9ee42b47a94dc6c8cc1c361f5d3167520b989\" pid:6337 exited_at:{seconds:1742871235 nanos:50139710}"
Mar 25 02:53:57.989123 kubelet[2798]: E0325 02:53:57.989040 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:53:59.869895 systemd[1]: Started sshd@75-10.230.58.198:22-139.178.68.195:43698.service - OpenSSH per-connection server daemon (139.178.68.195:43698).
Mar 25 02:54:00.788690 sshd[6351]: Accepted publickey for core from 139.178.68.195 port 43698 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:54:00.790991 sshd-session[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:54:00.799927 systemd-logind[1503]: New session 69 of user core.
Mar 25 02:54:00.805206 systemd[1]: Started session-69.scope - Session 69 of User core.
Mar 25 02:54:01.504919 sshd[6353]: Connection closed by 139.178.68.195 port 43698
Mar 25 02:54:01.504228 sshd-session[6351]: pam_unix(sshd:session): session closed for user core
Mar 25 02:54:01.510155 systemd-logind[1503]: Session 69 logged out. Waiting for processes to exit.
Mar 25 02:54:01.511086 systemd[1]: sshd@75-10.230.58.198:22-139.178.68.195:43698.service: Deactivated successfully.
Mar 25 02:54:01.514805 systemd[1]: session-69.scope: Deactivated successfully.
Mar 25 02:54:01.518003 systemd-logind[1503]: Removed session 69.
Mar 25 02:54:02.990145 kubelet[2798]: E0325 02:54:02.990065 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:06.663806 systemd[1]: Started sshd@76-10.230.58.198:22-139.178.68.195:55356.service - OpenSSH per-connection server daemon (139.178.68.195:55356).
Mar 25 02:54:07.565360 sshd[6366]: Accepted publickey for core from 139.178.68.195 port 55356 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:54:07.567668 sshd-session[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:54:07.575354 systemd-logind[1503]: New session 70 of user core.
Mar 25 02:54:07.586107 systemd[1]: Started session-70.scope - Session 70 of User core.
Mar 25 02:54:07.991032 kubelet[2798]: E0325 02:54:07.990959 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:08.273911 sshd[6368]: Connection closed by 139.178.68.195 port 55356
Mar 25 02:54:08.275265 sshd-session[6366]: pam_unix(sshd:session): session closed for user core
Mar 25 02:54:08.281133 systemd[1]: sshd@76-10.230.58.198:22-139.178.68.195:55356.service: Deactivated successfully.
Mar 25 02:54:08.284792 systemd[1]: session-70.scope: Deactivated successfully.
Mar 25 02:54:08.286090 systemd-logind[1503]: Session 70 logged out. Waiting for processes to exit.
Mar 25 02:54:08.287697 systemd-logind[1503]: Removed session 70.
Mar 25 02:54:12.991637 kubelet[2798]: E0325 02:54:12.991448 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:13.430199 systemd[1]: Started sshd@77-10.230.58.198:22-139.178.68.195:55368.service - OpenSSH per-connection server daemon (139.178.68.195:55368).
Mar 25 02:54:14.341929 sshd[6380]: Accepted publickey for core from 139.178.68.195 port 55368 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:54:14.344077 sshd-session[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:54:14.351955 systemd-logind[1503]: New session 71 of user core.
Mar 25 02:54:14.357085 systemd[1]: Started session-71.scope - Session 71 of User core.
Mar 25 02:54:15.052015 sshd[6382]: Connection closed by 139.178.68.195 port 55368
Mar 25 02:54:15.053141 sshd-session[6380]: pam_unix(sshd:session): session closed for user core
Mar 25 02:54:15.058620 systemd[1]: sshd@77-10.230.58.198:22-139.178.68.195:55368.service: Deactivated successfully.
Mar 25 02:54:15.062289 systemd[1]: session-71.scope: Deactivated successfully.
Mar 25 02:54:15.063768 systemd-logind[1503]: Session 71 logged out. Waiting for processes to exit.
Mar 25 02:54:15.065443 systemd-logind[1503]: Removed session 71.
Mar 25 02:54:17.992263 kubelet[2798]: E0325 02:54:17.992164 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:20.210822 systemd[1]: Started sshd@78-10.230.58.198:22-139.178.68.195:35764.service - OpenSSH per-connection server daemon (139.178.68.195:35764).
Mar 25 02:54:21.112709 sshd[6393]: Accepted publickey for core from 139.178.68.195 port 35764 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:54:21.114986 sshd-session[6393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:54:21.122835 systemd-logind[1503]: New session 72 of user core.
Mar 25 02:54:21.129095 systemd[1]: Started session-72.scope - Session 72 of User core.
Mar 25 02:54:21.827362 sshd[6397]: Connection closed by 139.178.68.195 port 35764
Mar 25 02:54:21.828504 sshd-session[6393]: pam_unix(sshd:session): session closed for user core
Mar 25 02:54:21.834437 systemd-logind[1503]: Session 72 logged out. Waiting for processes to exit.
Mar 25 02:54:21.835594 systemd[1]: sshd@78-10.230.58.198:22-139.178.68.195:35764.service: Deactivated successfully.
Mar 25 02:54:21.840187 systemd[1]: session-72.scope: Deactivated successfully.
Mar 25 02:54:21.842183 systemd-logind[1503]: Removed session 72.
Mar 25 02:54:22.457831 containerd[1520]: time="2025-03-25T02:54:22.457756294Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"c5ab085b7e4dc77adf1270cb99f328f9d0409ecd0d973dc1df394a2290991c39\" pid:6418 exited_at:{seconds:1742871262 nanos:457206778}"
Mar 25 02:54:22.992767 kubelet[2798]: E0325 02:54:22.992681 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:25.046340 containerd[1520]: time="2025-03-25T02:54:25.046275513Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"fadf617cabf0b49137daa801c693754874609625f29670dfcfb9fbb391a25f0b\" pid:6440 exited_at:{seconds:1742871265 nanos:44354111}"
Mar 25 02:54:26.988158 systemd[1]: Started sshd@79-10.230.58.198:22-139.178.68.195:44850.service - OpenSSH per-connection server daemon (139.178.68.195:44850).
Mar 25 02:54:27.931223 sshd[6452]: Accepted publickey for core from 139.178.68.195 port 44850 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:54:27.934069 sshd-session[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:54:27.944125 systemd-logind[1503]: New session 73 of user core.
Mar 25 02:54:27.949087 systemd[1]: Started session-73.scope - Session 73 of User core.
Mar 25 02:54:27.993378 kubelet[2798]: E0325 02:54:27.993311 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:28.662502 sshd[6456]: Connection closed by 139.178.68.195 port 44850
Mar 25 02:54:28.663635 sshd-session[6452]: pam_unix(sshd:session): session closed for user core
Mar 25 02:54:28.669424 systemd[1]: sshd@79-10.230.58.198:22-139.178.68.195:44850.service: Deactivated successfully.
Mar 25 02:54:28.672267 systemd[1]: session-73.scope: Deactivated successfully.
Mar 25 02:54:28.673464 systemd-logind[1503]: Session 73 logged out. Waiting for processes to exit.
Mar 25 02:54:28.675071 systemd-logind[1503]: Removed session 73.
Mar 25 02:54:32.994502 kubelet[2798]: E0325 02:54:32.994399 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:33.819959 systemd[1]: Started sshd@80-10.230.58.198:22-139.178.68.195:44864.service - OpenSSH per-connection server daemon (139.178.68.195:44864).
Mar 25 02:54:34.725753 sshd[6467]: Accepted publickey for core from 139.178.68.195 port 44864 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:54:34.728367 sshd-session[6467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:54:34.737187 systemd-logind[1503]: New session 74 of user core.
Mar 25 02:54:34.746066 systemd[1]: Started session-74.scope - Session 74 of User core.
Mar 25 02:54:35.445397 sshd[6469]: Connection closed by 139.178.68.195 port 44864
Mar 25 02:54:35.446652 sshd-session[6467]: pam_unix(sshd:session): session closed for user core
Mar 25 02:54:35.453140 systemd[1]: sshd@80-10.230.58.198:22-139.178.68.195:44864.service: Deactivated successfully.
Mar 25 02:54:35.456461 systemd[1]: session-74.scope: Deactivated successfully.
Mar 25 02:54:35.457784 systemd-logind[1503]: Session 74 logged out. Waiting for processes to exit.
Mar 25 02:54:35.459501 systemd-logind[1503]: Removed session 74.
Mar 25 02:54:35.747242 systemd[1]: Started sshd@81-10.230.58.198:22-81.192.87.130:15005.service - OpenSSH per-connection server daemon (81.192.87.130:15005).
Mar 25 02:54:36.092259 sshd[6480]: Invalid user thomas from 81.192.87.130 port 15005
Mar 25 02:54:36.146105 sshd[6480]: Received disconnect from 81.192.87.130 port 15005:11: Bye Bye [preauth]
Mar 25 02:54:36.146105 sshd[6480]: Disconnected from invalid user thomas 81.192.87.130 port 15005 [preauth]
Mar 25 02:54:36.148160 systemd[1]: sshd@81-10.230.58.198:22-81.192.87.130:15005.service: Deactivated successfully.
Mar 25 02:54:37.995379 kubelet[2798]: E0325 02:54:37.995261 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:40.601421 systemd[1]: Started sshd@82-10.230.58.198:22-139.178.68.195:43292.service - OpenSSH per-connection server daemon (139.178.68.195:43292).
Mar 25 02:54:41.503745 sshd[6487]: Accepted publickey for core from 139.178.68.195 port 43292 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:54:41.506508 sshd-session[6487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:54:41.515184 systemd-logind[1503]: New session 75 of user core.
Mar 25 02:54:41.521103 systemd[1]: Started session-75.scope - Session 75 of User core.
Mar 25 02:54:42.211111 sshd[6489]: Connection closed by 139.178.68.195 port 43292
Mar 25 02:54:42.212327 sshd-session[6487]: pam_unix(sshd:session): session closed for user core
Mar 25 02:54:42.219048 systemd[1]: sshd@82-10.230.58.198:22-139.178.68.195:43292.service: Deactivated successfully.
Mar 25 02:54:42.222201 systemd[1]: session-75.scope: Deactivated successfully.
Mar 25 02:54:42.223788 systemd-logind[1503]: Session 75 logged out. Waiting for processes to exit.
Mar 25 02:54:42.225371 systemd-logind[1503]: Removed session 75.
Mar 25 02:54:42.995625 kubelet[2798]: E0325 02:54:42.995518 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:45.344866 containerd[1520]: time="2025-03-25T02:54:45.344671674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"3873f6876e9e5a771aacf2a6d39446b2b6d7e9941182f472a461cf08a5d805f0\" pid:6513 exited_at:{seconds:1742871285 nanos:343437126}"
Mar 25 02:54:47.374227 systemd[1]: Started sshd@83-10.230.58.198:22-139.178.68.195:59184.service - OpenSSH per-connection server daemon (139.178.68.195:59184).
Mar 25 02:54:47.996639 kubelet[2798]: E0325 02:54:47.996563 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:48.291947 sshd[6523]: Accepted publickey for core from 139.178.68.195 port 59184 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:54:48.294267 sshd-session[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:54:48.302531 systemd-logind[1503]: New session 76 of user core.
Mar 25 02:54:48.309089 systemd[1]: Started session-76.scope - Session 76 of User core.
Mar 25 02:54:49.010142 sshd[6525]: Connection closed by 139.178.68.195 port 59184
Mar 25 02:54:49.011310 sshd-session[6523]: pam_unix(sshd:session): session closed for user core
Mar 25 02:54:49.017032 systemd[1]: sshd@83-10.230.58.198:22-139.178.68.195:59184.service: Deactivated successfully.
Mar 25 02:54:49.021349 systemd[1]: session-76.scope: Deactivated successfully.
Mar 25 02:54:49.022607 systemd-logind[1503]: Session 76 logged out. Waiting for processes to exit.
Mar 25 02:54:49.024516 systemd-logind[1503]: Removed session 76.
Mar 25 02:54:52.457189 containerd[1520]: time="2025-03-25T02:54:52.457111168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"1f1f87eaf43ea2cdf4ccb6ece489f736b3406a19acc971c41830175ee0dd3945\" pid:6547 exited_at:{seconds:1742871292 nanos:456661504}"
Mar 25 02:54:52.997798 kubelet[2798]: E0325 02:54:52.997727 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:54:54.167829 systemd[1]: Started sshd@84-10.230.58.198:22-139.178.68.195:59186.service - OpenSSH per-connection server daemon (139.178.68.195:59186).
Mar 25 02:54:55.058366 containerd[1520]: time="2025-03-25T02:54:55.058293531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"f5e5c2f0cf40de5a3d14e53b4d920bce047056eeabad00d3bff276a081f444ed\" pid:6573 exited_at:{seconds:1742871295 nanos:57593776}"
Mar 25 02:54:55.086978 sshd[6557]: Accepted publickey for core from 139.178.68.195 port 59186 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:54:55.089270 sshd-session[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:54:55.098948 systemd-logind[1503]: New session 77 of user core.
Mar 25 02:54:55.103130 systemd[1]: Started session-77.scope - Session 77 of User core.
Mar 25 02:54:55.826943 sshd[6584]: Connection closed by 139.178.68.195 port 59186
Mar 25 02:54:55.827336 sshd-session[6557]: pam_unix(sshd:session): session closed for user core
Mar 25 02:54:55.832769 systemd[1]: sshd@84-10.230.58.198:22-139.178.68.195:59186.service: Deactivated successfully.
Mar 25 02:54:55.835717 systemd[1]: session-77.scope: Deactivated successfully.
Mar 25 02:54:55.837636 systemd-logind[1503]: Session 77 logged out. Waiting for processes to exit.
Mar 25 02:54:55.839609 systemd-logind[1503]: Removed session 77.
Mar 25 02:54:57.998629 kubelet[2798]: E0325 02:54:57.998540 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:00.981593 systemd[1]: Started sshd@85-10.230.58.198:22-139.178.68.195:35484.service - OpenSSH per-connection server daemon (139.178.68.195:35484).
Mar 25 02:55:01.910457 sshd[6603]: Accepted publickey for core from 139.178.68.195 port 35484 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:01.912774 sshd-session[6603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:01.923068 systemd-logind[1503]: New session 78 of user core.
Mar 25 02:55:01.934087 systemd[1]: Started session-78.scope - Session 78 of User core.
Mar 25 02:55:02.633188 sshd[6607]: Connection closed by 139.178.68.195 port 35484
Mar 25 02:55:02.633020 sshd-session[6603]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:02.638998 systemd-logind[1503]: Session 78 logged out. Waiting for processes to exit.
Mar 25 02:55:02.640815 systemd[1]: sshd@85-10.230.58.198:22-139.178.68.195:35484.service: Deactivated successfully.
Mar 25 02:55:02.643851 systemd[1]: session-78.scope: Deactivated successfully.
Mar 25 02:55:02.645756 systemd-logind[1503]: Removed session 78.
Mar 25 02:55:02.999298 kubelet[2798]: E0325 02:55:02.999084 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:07.793210 systemd[1]: Started sshd@86-10.230.58.198:22-139.178.68.195:58498.service - OpenSSH per-connection server daemon (139.178.68.195:58498).
Mar 25 02:55:08.000132 kubelet[2798]: E0325 02:55:08.000055 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:08.702814 sshd[6629]: Accepted publickey for core from 139.178.68.195 port 58498 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:08.705285 sshd-session[6629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:08.713544 systemd-logind[1503]: New session 79 of user core.
Mar 25 02:55:08.718119 systemd[1]: Started session-79.scope - Session 79 of User core.
Mar 25 02:55:09.536527 sshd[6631]: Connection closed by 139.178.68.195 port 58498
Mar 25 02:55:09.542716 sshd-session[6629]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:09.560801 systemd[1]: sshd@86-10.230.58.198:22-139.178.68.195:58498.service: Deactivated successfully.
Mar 25 02:55:09.565535 systemd[1]: session-79.scope: Deactivated successfully.
Mar 25 02:55:09.571849 systemd-logind[1503]: Session 79 logged out. Waiting for processes to exit.
Mar 25 02:55:09.577518 systemd-logind[1503]: Removed session 79.
Mar 25 02:55:13.000999 kubelet[2798]: E0325 02:55:13.000708 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:14.696245 systemd[1]: Started sshd@87-10.230.58.198:22-139.178.68.195:58502.service - OpenSSH per-connection server daemon (139.178.68.195:58502).
Mar 25 02:55:15.628276 sshd[6643]: Accepted publickey for core from 139.178.68.195 port 58502 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:15.630677 sshd-session[6643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:15.639477 systemd-logind[1503]: New session 80 of user core.
Mar 25 02:55:15.647171 systemd[1]: Started session-80.scope - Session 80 of User core.
Mar 25 02:55:16.380942 sshd[6645]: Connection closed by 139.178.68.195 port 58502
Mar 25 02:55:16.382101 sshd-session[6643]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:16.388373 systemd-logind[1503]: Session 80 logged out. Waiting for processes to exit.
Mar 25 02:55:16.389630 systemd[1]: sshd@87-10.230.58.198:22-139.178.68.195:58502.service: Deactivated successfully.
Mar 25 02:55:16.393798 systemd[1]: session-80.scope: Deactivated successfully.
Mar 25 02:55:16.395585 systemd-logind[1503]: Removed session 80.
Mar 25 02:55:18.001912 kubelet[2798]: E0325 02:55:18.001801 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:20.876571 kubelet[2798]: E0325 02:55:20.876482 2798 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:55:20.876571 kubelet[2798]: E0325 02:55:20.876551 2798 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:55:21.534746 systemd[1]: Started sshd@88-10.230.58.198:22-139.178.68.195:48052.service - OpenSSH per-connection server daemon (139.178.68.195:48052).
Mar 25 02:55:22.439104 sshd[6658]: Accepted publickey for core from 139.178.68.195 port 48052 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:22.442481 sshd-session[6658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:22.452000 systemd-logind[1503]: New session 81 of user core.
Mar 25 02:55:22.458347 systemd[1]: Started session-81.scope - Session 81 of User core.
Mar 25 02:55:22.470613 containerd[1520]: time="2025-03-25T02:55:22.470510253Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"a547dfddc58c8c7cdc81ad5eb8587ad83c84df1402a9098a45ad3b61c593bb21\" pid:6673 exited_at:{seconds:1742871322 nanos:469995133}"
Mar 25 02:55:23.002117 kubelet[2798]: E0325 02:55:23.002016 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:23.149904 sshd[6679]: Connection closed by 139.178.68.195 port 48052
Mar 25 02:55:23.150966 sshd-session[6658]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:23.155817 systemd[1]: sshd@88-10.230.58.198:22-139.178.68.195:48052.service: Deactivated successfully.
Mar 25 02:55:23.159156 systemd[1]: session-81.scope: Deactivated successfully.
Mar 25 02:55:23.161760 systemd-logind[1503]: Session 81 logged out. Waiting for processes to exit.
Mar 25 02:55:23.163422 systemd-logind[1503]: Removed session 81.
Mar 25 02:55:25.046004 containerd[1520]: time="2025-03-25T02:55:25.045618876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"2df02ecabd97c72d96562630143de916c3d8acb9286f98bb113f52946506d470\" pid:6704 exited_at:{seconds:1742871325 nanos:43555731}"
Mar 25 02:55:28.002535 kubelet[2798]: E0325 02:55:28.002465 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:28.306284 systemd[1]: Started sshd@89-10.230.58.198:22-139.178.68.195:53376.service - OpenSSH per-connection server daemon (139.178.68.195:53376).
Mar 25 02:55:29.229153 sshd[6718]: Accepted publickey for core from 139.178.68.195 port 53376 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:29.232748 sshd-session[6718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:29.240834 systemd-logind[1503]: New session 82 of user core.
Mar 25 02:55:29.248178 systemd[1]: Started session-82.scope - Session 82 of User core.
Mar 25 02:55:29.957906 sshd[6728]: Connection closed by 139.178.68.195 port 53376
Mar 25 02:55:29.957133 sshd-session[6718]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:29.963117 systemd[1]: sshd@89-10.230.58.198:22-139.178.68.195:53376.service: Deactivated successfully.
Mar 25 02:55:29.967119 systemd[1]: session-82.scope: Deactivated successfully.
Mar 25 02:55:29.968759 systemd-logind[1503]: Session 82 logged out. Waiting for processes to exit.
Mar 25 02:55:29.970501 systemd-logind[1503]: Removed session 82.
Mar 25 02:55:33.003058 kubelet[2798]: E0325 02:55:33.002993 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:35.117500 systemd[1]: Started sshd@90-10.230.58.198:22-139.178.68.195:53382.service - OpenSSH per-connection server daemon (139.178.68.195:53382).
Mar 25 02:55:36.036558 sshd[6739]: Accepted publickey for core from 139.178.68.195 port 53382 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:36.039168 sshd-session[6739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:36.047484 systemd-logind[1503]: New session 83 of user core.
Mar 25 02:55:36.056172 systemd[1]: Started session-83.scope - Session 83 of User core.
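The containerd "TaskExit event in podsandbox handler" entries record short-lived processes exiting inside two long-running sandboxes; the roughly 30-second cadence and the two recurring container_id values are consistent with periodic exec probes, though the log does not say so explicitly. A sketch of watching the same event stream directly is below, assuming Go with the github.com/containerd/containerd client, the socket path used earlier, and the "k8s.io" namespace that containerd's CRI plugin conventionally uses.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed socket path; "k8s.io" is the usual CRI-plugin namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Filter to task-exit events, the counterpart of the
	// "TaskExit event in podsandbox handler" lines in this journal.
	ch, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case env := <-ch:
			fmt.Println(env.Timestamp, env.Namespace, env.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}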
Mar 25 02:55:36.756940 sshd[6741]: Connection closed by 139.178.68.195 port 53382
Mar 25 02:55:36.758194 sshd-session[6739]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:36.763506 systemd[1]: sshd@90-10.230.58.198:22-139.178.68.195:53382.service: Deactivated successfully.
Mar 25 02:55:36.769262 systemd[1]: session-83.scope: Deactivated successfully.
Mar 25 02:55:36.772097 systemd-logind[1503]: Session 83 logged out. Waiting for processes to exit.
Mar 25 02:55:36.774566 systemd-logind[1503]: Removed session 83.
Mar 25 02:55:38.003898 kubelet[2798]: E0325 02:55:38.003791 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:40.652516 systemd[1]: Started sshd@91-10.230.58.198:22-81.192.87.130:26636.service - OpenSSH per-connection server daemon (81.192.87.130:26636).
Mar 25 02:55:40.999955 sshd[6752]: Invalid user test from 81.192.87.130 port 26636
Mar 25 02:55:41.057917 sshd[6752]: Received disconnect from 81.192.87.130 port 26636:11: Bye Bye [preauth]
Mar 25 02:55:41.057917 sshd[6752]: Disconnected from invalid user test 81.192.87.130 port 26636 [preauth]
Mar 25 02:55:41.060794 systemd[1]: sshd@91-10.230.58.198:22-81.192.87.130:26636.service: Deactivated successfully.
Mar 25 02:55:41.914682 systemd[1]: Started sshd@92-10.230.58.198:22-139.178.68.195:47830.service - OpenSSH per-connection server daemon (139.178.68.195:47830).
Mar 25 02:55:42.821006 sshd[6757]: Accepted publickey for core from 139.178.68.195 port 47830 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:42.823461 sshd-session[6757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:42.832052 systemd-logind[1503]: New session 84 of user core.
Mar 25 02:55:42.839156 systemd[1]: Started session-84.scope - Session 84 of User core.
Mar 25 02:55:43.004252 kubelet[2798]: E0325 02:55:43.004176 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:43.524030 sshd[6759]: Connection closed by 139.178.68.195 port 47830
Mar 25 02:55:43.523820 sshd-session[6757]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:43.530363 systemd[1]: sshd@92-10.230.58.198:22-139.178.68.195:47830.service: Deactivated successfully.
Mar 25 02:55:43.533449 systemd[1]: session-84.scope: Deactivated successfully.
Mar 25 02:55:43.535165 systemd-logind[1503]: Session 84 logged out. Waiting for processes to exit.
Mar 25 02:55:43.537347 systemd-logind[1503]: Removed session 84.
Mar 25 02:55:43.681773 systemd[1]: Started sshd@93-10.230.58.198:22-139.178.68.195:47834.service - OpenSSH per-connection server daemon (139.178.68.195:47834).
Mar 25 02:55:44.594598 sshd[6771]: Accepted publickey for core from 139.178.68.195 port 47834 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:44.596909 sshd-session[6771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:44.607970 systemd-logind[1503]: New session 85 of user core.
Mar 25 02:55:44.620148 systemd[1]: Started session-85.scope - Session 85 of User core.
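The sshd@91 exchange above, unlike the publickey sessions for user core, is an unauthenticated probe: a connection from 81.192.87.130 guessing the username "test" and disconnecting before authentication; similar probes from other addresses recur later in this journal. A small sketch for tallying such probes per source address from journal output follows; the regular expression mirrors the "Invalid user ... from ... port ..." lines above, and feeding it via journalctl is one option (the exact unit or syslog identifier to filter on is an assumption, since it varies by distribution).

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches lines like: "Invalid user test from 81.192.87.130 port 26636".
var invalidUser = regexp.MustCompile(`Invalid user (\S+) from (\S+) port \d+`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := invalidUser.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]]++ // key by source address, m[1] holds the guessed user
		}
	}
	for ip, n := range counts {
		fmt.Printf("%s\t%d failed preauth probes\n", ip, n)
	}
}

Fed this section of the journal on stdin (for example, journalctl piped into the program), it would report 81.192.87.130 and 95.90.242.212 with a handful of probes each.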
Mar 25 02:55:45.363326 containerd[1520]: time="2025-03-25T02:55:45.363261212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"12aa03dd33533272c182608a58dd7b47a2f0d4fd82063a6c45592c43296ce68f\" pid:6791 exited_at:{seconds:1742871345 nanos:362680603}"
Mar 25 02:55:45.713900 sshd[6773]: Connection closed by 139.178.68.195 port 47834
Mar 25 02:55:45.715263 sshd-session[6771]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:45.722344 systemd[1]: sshd@93-10.230.58.198:22-139.178.68.195:47834.service: Deactivated successfully.
Mar 25 02:55:45.726378 systemd[1]: session-85.scope: Deactivated successfully.
Mar 25 02:55:45.728236 systemd-logind[1503]: Session 85 logged out. Waiting for processes to exit.
Mar 25 02:55:45.729763 systemd-logind[1503]: Removed session 85.
Mar 25 02:55:45.871474 systemd[1]: Started sshd@94-10.230.58.198:22-139.178.68.195:52222.service - OpenSSH per-connection server daemon (139.178.68.195:52222).
Mar 25 02:55:46.811424 sshd[6803]: Accepted publickey for core from 139.178.68.195 port 52222 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:46.813913 sshd-session[6803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:46.822209 systemd-logind[1503]: New session 86 of user core.
Mar 25 02:55:46.833155 systemd[1]: Started session-86.scope - Session 86 of User core.
Mar 25 02:55:48.004576 kubelet[2798]: E0325 02:55:48.004487 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:50.113363 sshd[6805]: Connection closed by 139.178.68.195 port 52222
Mar 25 02:55:50.115081 sshd-session[6803]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:50.119701 systemd[1]: sshd@94-10.230.58.198:22-139.178.68.195:52222.service: Deactivated successfully.
Mar 25 02:55:50.122892 systemd[1]: session-86.scope: Deactivated successfully.
Mar 25 02:55:50.125307 systemd-logind[1503]: Session 86 logged out. Waiting for processes to exit.
Mar 25 02:55:50.127411 systemd-logind[1503]: Removed session 86.
Mar 25 02:55:50.269630 systemd[1]: Started sshd@95-10.230.58.198:22-139.178.68.195:52224.service - OpenSSH per-connection server daemon (139.178.68.195:52224).
Mar 25 02:55:51.194348 sshd[6822]: Accepted publickey for core from 139.178.68.195 port 52224 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:51.197047 sshd-session[6822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:51.205968 systemd-logind[1503]: New session 87 of user core.
Mar 25 02:55:51.211128 systemd[1]: Started session-87.scope - Session 87 of User core.
Mar 25 02:55:52.456030 containerd[1520]: time="2025-03-25T02:55:52.455660065Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"3320c0b309f836d42b8b2c077b0074bf8240df6b2f7e9119ec5fb17586528291\" pid:6841 exited_at:{seconds:1742871352 nanos:455012220}"
Mar 25 02:55:52.482167 sshd[6824]: Connection closed by 139.178.68.195 port 52224
Mar 25 02:55:52.483118 sshd-session[6822]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:52.490099 systemd[1]: sshd@95-10.230.58.198:22-139.178.68.195:52224.service: Deactivated successfully.
Mar 25 02:55:52.493532 systemd[1]: session-87.scope: Deactivated successfully.
Mar 25 02:55:52.495777 systemd-logind[1503]: Session 87 logged out. Waiting for processes to exit.
Mar 25 02:55:52.498098 systemd-logind[1503]: Removed session 87.
Mar 25 02:55:52.641230 systemd[1]: Started sshd@96-10.230.58.198:22-139.178.68.195:52230.service - OpenSSH per-connection server daemon (139.178.68.195:52230).
Mar 25 02:55:53.005607 kubelet[2798]: E0325 02:55:53.005542 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:53.551384 sshd[6854]: Accepted publickey for core from 139.178.68.195 port 52230 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:55:53.552097 sshd-session[6854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:55:53.560950 systemd-logind[1503]: New session 88 of user core.
Mar 25 02:55:53.570116 systemd[1]: Started session-88.scope - Session 88 of User core.
Mar 25 02:55:54.253475 sshd[6856]: Connection closed by 139.178.68.195 port 52230
Mar 25 02:55:54.254702 sshd-session[6854]: pam_unix(sshd:session): session closed for user core
Mar 25 02:55:54.260205 systemd[1]: sshd@96-10.230.58.198:22-139.178.68.195:52230.service: Deactivated successfully.
Mar 25 02:55:54.263381 systemd[1]: session-88.scope: Deactivated successfully.
Mar 25 02:55:54.264696 systemd-logind[1503]: Session 88 logged out. Waiting for processes to exit.
Mar 25 02:55:54.266308 systemd-logind[1503]: Removed session 88.
Mar 25 02:55:55.051142 containerd[1520]: time="2025-03-25T02:55:55.046430525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"b52fbb9d851017d417b92d2e5306d298d83b2837ae5ab6885ca7c15229211ec7\" pid:6879 exited_at:{seconds:1742871355 nanos:45506885}"
Mar 25 02:55:58.006204 kubelet[2798]: E0325 02:55:58.006124 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:55:59.413577 systemd[1]: Started sshd@97-10.230.58.198:22-139.178.68.195:51218.service - OpenSSH per-connection server daemon (139.178.68.195:51218).
Mar 25 02:56:00.323203 sshd[6894]: Accepted publickey for core from 139.178.68.195 port 51218 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:56:00.324164 sshd-session[6894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:56:00.334009 systemd-logind[1503]: New session 89 of user core.
Mar 25 02:56:00.342215 systemd[1]: Started session-89.scope - Session 89 of User core.
Mar 25 02:56:01.028438 sshd[6898]: Connection closed by 139.178.68.195 port 51218
Mar 25 02:56:01.029540 sshd-session[6894]: pam_unix(sshd:session): session closed for user core
Mar 25 02:56:01.036094 systemd[1]: sshd@97-10.230.58.198:22-139.178.68.195:51218.service: Deactivated successfully.
Mar 25 02:56:01.039380 systemd[1]: session-89.scope: Deactivated successfully.
Mar 25 02:56:01.041780 systemd-logind[1503]: Session 89 logged out. Waiting for processes to exit.
Mar 25 02:56:01.043415 systemd-logind[1503]: Removed session 89.
Mar 25 02:56:03.006554 kubelet[2798]: E0325 02:56:03.006487 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:06.187117 systemd[1]: Started sshd@98-10.230.58.198:22-139.178.68.195:54840.service - OpenSSH per-connection server daemon (139.178.68.195:54840).
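Every accepted connection above follows the same lifecycle: Accepted publickey, pam_unix session opened, systemd-logind "New session N", session-N.scope started, then the mirror image on close. A sketch that pairs the "New session N" and "Removed session N" lines to measure session lifetimes follows; the program and its names are illustrative, and it assumes the year-less short journal timestamp format shown in these lines, so computed durations are only meaningful within one capture.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Short journal timestamp layout, e.g. "Mar 25 02:55:53.560950"; no year.
const stamp = "Jan 2 15:04:05.000000"

var (
	opened  = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*New session (\d+) of user (\S+)\.`)
	removed = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func main() {
	start := map[string]time.Time{} // session id -> open time
	user := map[string]string{}     // session id -> user name
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := opened.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				start[m[2]], user[m[2]] = t, m[3]
			}
			continue
		}
		if m := removed.FindStringSubmatch(line); m != nil {
			if t0, ok := start[m[2]]; ok {
				if t1, err := time.Parse(stamp, m[1]); err == nil {
					fmt.Printf("session %s (%s): %v\n", m[2], user[m[2]], t1.Sub(t0))
				}
			}
		}
	}
}

On this section it would show sessions for user core lasting well under a second to a few seconds each, which matches short scripted logins rather than interactive shells.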
Mar 25 02:56:07.097835 sshd[6910]: Accepted publickey for core from 139.178.68.195 port 54840 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:56:07.100151 sshd-session[6910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:56:07.109350 systemd-logind[1503]: New session 90 of user core.
Mar 25 02:56:07.118062 systemd[1]: Started session-90.scope - Session 90 of User core.
Mar 25 02:56:07.816252 sshd[6913]: Connection closed by 139.178.68.195 port 54840
Mar 25 02:56:07.817612 sshd-session[6910]: pam_unix(sshd:session): session closed for user core
Mar 25 02:56:07.824619 systemd[1]: sshd@98-10.230.58.198:22-139.178.68.195:54840.service: Deactivated successfully.
Mar 25 02:56:07.827578 systemd[1]: session-90.scope: Deactivated successfully.
Mar 25 02:56:07.829327 systemd-logind[1503]: Session 90 logged out. Waiting for processes to exit.
Mar 25 02:56:07.831798 systemd-logind[1503]: Removed session 90.
Mar 25 02:56:08.007769 kubelet[2798]: E0325 02:56:08.007693 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:12.976223 systemd[1]: Started sshd@99-10.230.58.198:22-139.178.68.195:54844.service - OpenSSH per-connection server daemon (139.178.68.195:54844).
Mar 25 02:56:13.008768 kubelet[2798]: E0325 02:56:13.008703 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:13.915706 sshd[6925]: Accepted publickey for core from 139.178.68.195 port 54844 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:56:13.918142 sshd-session[6925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:56:13.926635 systemd-logind[1503]: New session 91 of user core.
Mar 25 02:56:13.935225 systemd[1]: Started session-91.scope - Session 91 of User core.
Mar 25 02:56:14.627741 sshd[6927]: Connection closed by 139.178.68.195 port 54844
Mar 25 02:56:14.626641 sshd-session[6925]: pam_unix(sshd:session): session closed for user core
Mar 25 02:56:14.631773 systemd-logind[1503]: Session 91 logged out. Waiting for processes to exit.
Mar 25 02:56:14.632743 systemd[1]: sshd@99-10.230.58.198:22-139.178.68.195:54844.service: Deactivated successfully.
Mar 25 02:56:14.637367 systemd[1]: session-91.scope: Deactivated successfully.
Mar 25 02:56:14.640627 systemd-logind[1503]: Removed session 91.
Mar 25 02:56:18.009368 kubelet[2798]: E0325 02:56:18.009255 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:19.784760 systemd[1]: Started sshd@100-10.230.58.198:22-139.178.68.195:49280.service - OpenSSH per-connection server daemon (139.178.68.195:49280).
Mar 25 02:56:20.693966 sshd[6939]: Accepted publickey for core from 139.178.68.195 port 49280 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:56:20.696244 sshd-session[6939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:56:20.705279 systemd-logind[1503]: New session 92 of user core.
Mar 25 02:56:20.710260 systemd[1]: Started session-92.scope - Session 92 of User core.
Mar 25 02:56:21.395535 sshd[6943]: Connection closed by 139.178.68.195 port 49280
Mar 25 02:56:21.396826 sshd-session[6939]: pam_unix(sshd:session): session closed for user core
Mar 25 02:56:21.402598 systemd[1]: sshd@100-10.230.58.198:22-139.178.68.195:49280.service: Deactivated successfully.
Mar 25 02:56:21.406798 systemd[1]: session-92.scope: Deactivated successfully.
Mar 25 02:56:21.409382 systemd-logind[1503]: Session 92 logged out. Waiting for processes to exit.
Mar 25 02:56:21.411161 systemd-logind[1503]: Removed session 92.
Mar 25 02:56:22.457940 containerd[1520]: time="2025-03-25T02:56:22.457425873Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"bddad2094775f2fa285f01cbbd536bdd0b7fcf96cc86d6b942f841fabd3bbcf9\" pid:6966 exited_at:{seconds:1742871382 nanos:457135042}"
Mar 25 02:56:23.010272 kubelet[2798]: E0325 02:56:23.010192 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:25.052956 containerd[1520]: time="2025-03-25T02:56:25.052831444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"04b093c29194e564c29fd3ce48e11d214213d0ba312ce6fa225734a9e398a3aa\" pid:6988 exited_at:{seconds:1742871385 nanos:52352123}"
Mar 25 02:56:26.557337 systemd[1]: Started sshd@101-10.230.58.198:22-139.178.68.195:45324.service - OpenSSH per-connection server daemon (139.178.68.195:45324).
Mar 25 02:56:27.499788 sshd[7001]: Accepted publickey for core from 139.178.68.195 port 45324 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:56:27.502395 sshd-session[7001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:56:27.509783 systemd-logind[1503]: New session 93 of user core.
Mar 25 02:56:27.517147 systemd[1]: Started session-93.scope - Session 93 of User core.
Mar 25 02:56:28.011168 kubelet[2798]: E0325 02:56:28.011026 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:28.231726 sshd[7003]: Connection closed by 139.178.68.195 port 45324
Mar 25 02:56:28.233128 sshd-session[7001]: pam_unix(sshd:session): session closed for user core
Mar 25 02:56:28.238803 systemd[1]: sshd@101-10.230.58.198:22-139.178.68.195:45324.service: Deactivated successfully.
Mar 25 02:56:28.242540 systemd[1]: session-93.scope: Deactivated successfully.
Mar 25 02:56:28.243692 systemd-logind[1503]: Session 93 logged out. Waiting for processes to exit.
Mar 25 02:56:28.245727 systemd-logind[1503]: Removed session 93.
Mar 25 02:56:33.011564 kubelet[2798]: E0325 02:56:33.011442 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:33.392601 systemd[1]: Started sshd@102-10.230.58.198:22-139.178.68.195:45334.service - OpenSSH per-connection server daemon (139.178.68.195:45334).
Mar 25 02:56:34.308931 sshd[7021]: Accepted publickey for core from 139.178.68.195 port 45334 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:56:34.310488 sshd-session[7021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:56:34.321142 systemd-logind[1503]: New session 94 of user core.
Mar 25 02:56:34.329428 systemd[1]: Started session-94.scope - Session 94 of User core.
Mar 25 02:56:35.029946 sshd[7023]: Connection closed by 139.178.68.195 port 45334
Mar 25 02:56:35.031176 sshd-session[7021]: pam_unix(sshd:session): session closed for user core
Mar 25 02:56:35.035376 systemd-logind[1503]: Session 94 logged out. Waiting for processes to exit.
Mar 25 02:56:35.038350 systemd[1]: sshd@102-10.230.58.198:22-139.178.68.195:45334.service: Deactivated successfully.
Mar 25 02:56:35.042405 systemd[1]: session-94.scope: Deactivated successfully.
Mar 25 02:56:35.046432 systemd-logind[1503]: Removed session 94.
Mar 25 02:56:36.041183 systemd[1]: Started sshd@103-10.230.58.198:22-95.90.242.212:44869.service - OpenSSH per-connection server daemon (95.90.242.212:44869).
Mar 25 02:56:36.280181 sshd[7035]: Invalid user deploy from 95.90.242.212 port 44869
Mar 25 02:56:36.315071 sshd[7035]: Received disconnect from 95.90.242.212 port 44869:11: Bye Bye [preauth]
Mar 25 02:56:36.315071 sshd[7035]: Disconnected from invalid user deploy 95.90.242.212 port 44869 [preauth]
Mar 25 02:56:36.316934 systemd[1]: sshd@103-10.230.58.198:22-95.90.242.212:44869.service: Deactivated successfully.
Mar 25 02:56:38.011647 kubelet[2798]: E0325 02:56:38.011586 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:40.188792 systemd[1]: Started sshd@104-10.230.58.198:22-139.178.68.195:45048.service - OpenSSH per-connection server daemon (139.178.68.195:45048).
Mar 25 02:56:41.107910 sshd[7040]: Accepted publickey for core from 139.178.68.195 port 45048 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:56:41.108031 sshd-session[7040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:56:41.118961 systemd-logind[1503]: New session 95 of user core.
Mar 25 02:56:41.125064 systemd[1]: Started session-95.scope - Session 95 of User core.
Mar 25 02:56:41.826408 sshd[7042]: Connection closed by 139.178.68.195 port 45048
Mar 25 02:56:41.828540 sshd-session[7040]: pam_unix(sshd:session): session closed for user core
Mar 25 02:56:41.837387 systemd[1]: sshd@104-10.230.58.198:22-139.178.68.195:45048.service: Deactivated successfully.
Mar 25 02:56:41.842716 systemd[1]: session-95.scope: Deactivated successfully.
Mar 25 02:56:41.844691 systemd-logind[1503]: Session 95 logged out. Waiting for processes to exit.
Mar 25 02:56:41.847216 systemd-logind[1503]: Removed session 95.
Mar 25 02:56:43.012143 kubelet[2798]: E0325 02:56:43.012025 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:45.342380 containerd[1520]: time="2025-03-25T02:56:45.342066377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"b6c74c6ab20acb8760f8228f490d6cc49bcd25e7eb93a64f82d293ebb663540b\" pid:7076 exited_at:{seconds:1742871405 nanos:341295571}"
Mar 25 02:56:46.345345 systemd[1]: Started sshd@105-10.230.58.198:22-81.192.87.130:38288.service - OpenSSH per-connection server daemon (81.192.87.130:38288).
Mar 25 02:56:46.712249 sshd[7085]: Invalid user pip from 81.192.87.130 port 38288
Mar 25 02:56:46.770495 sshd[7085]: Received disconnect from 81.192.87.130 port 38288:11: Bye Bye [preauth]
Mar 25 02:56:46.770495 sshd[7085]: Disconnected from invalid user pip 81.192.87.130 port 38288 [preauth]
Mar 25 02:56:46.773159 systemd[1]: sshd@105-10.230.58.198:22-81.192.87.130:38288.service: Deactivated successfully.
Mar 25 02:56:46.986992 systemd[1]: Started sshd@106-10.230.58.198:22-139.178.68.195:40930.service - OpenSSH per-connection server daemon (139.178.68.195:40930).
Mar 25 02:56:47.898733 sshd[7090]: Accepted publickey for core from 139.178.68.195 port 40930 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:56:47.901215 sshd-session[7090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:56:47.909043 systemd-logind[1503]: New session 96 of user core.
Mar 25 02:56:47.915056 systemd[1]: Started session-96.scope - Session 96 of User core.
Mar 25 02:56:48.012509 kubelet[2798]: E0325 02:56:48.012409 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:48.620940 sshd[7092]: Connection closed by 139.178.68.195 port 40930
Mar 25 02:56:48.620630 sshd-session[7090]: pam_unix(sshd:session): session closed for user core
Mar 25 02:56:48.627018 systemd[1]: sshd@106-10.230.58.198:22-139.178.68.195:40930.service: Deactivated successfully.
Mar 25 02:56:48.631219 systemd[1]: session-96.scope: Deactivated successfully.
Mar 25 02:56:48.632745 systemd-logind[1503]: Session 96 logged out. Waiting for processes to exit.
Mar 25 02:56:48.634183 systemd-logind[1503]: Removed session 96.
Mar 25 02:56:52.464601 containerd[1520]: time="2025-03-25T02:56:52.464427925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"402d095d2c41e7ab78b81e137f04500284088308081e48040e092fcfc72ff702\" pid:7116 exited_at:{seconds:1742871412 nanos:462850052}"
Mar 25 02:56:53.013855 kubelet[2798]: E0325 02:56:53.013798 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:56:53.777314 systemd[1]: Started sshd@107-10.230.58.198:22-139.178.68.195:40942.service - OpenSSH per-connection server daemon (139.178.68.195:40942).
Mar 25 02:56:54.724319 sshd[7126]: Accepted publickey for core from 139.178.68.195 port 40942 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:56:54.727640 sshd-session[7126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:56:54.735314 systemd-logind[1503]: New session 97 of user core.
Mar 25 02:56:54.745183 systemd[1]: Started session-97.scope - Session 97 of User core.
Mar 25 02:56:55.042350 containerd[1520]: time="2025-03-25T02:56:55.041991355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"eba28b168cef251f0d1af0e10b85726e50472949294a490049897e2125cd7c05\" pid:7142 exited_at:{seconds:1742871415 nanos:41053450}"
Mar 25 02:56:55.502065 sshd[7128]: Connection closed by 139.178.68.195 port 40942
Mar 25 02:56:55.503413 sshd-session[7126]: pam_unix(sshd:session): session closed for user core
Mar 25 02:56:55.509394 systemd[1]: sshd@107-10.230.58.198:22-139.178.68.195:40942.service: Deactivated successfully.
Mar 25 02:56:55.510168 systemd-logind[1503]: Session 97 logged out. Waiting for processes to exit.
Mar 25 02:56:55.513120 systemd[1]: session-97.scope: Deactivated successfully.
Mar 25 02:56:55.516280 systemd-logind[1503]: Removed session 97.
Mar 25 02:56:58.014784 kubelet[2798]: E0325 02:56:58.014658 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:00.659662 systemd[1]: Started sshd@108-10.230.58.198:22-139.178.68.195:60564.service - OpenSSH per-connection server daemon (139.178.68.195:60564).
Mar 25 02:57:01.585928 sshd[7166]: Accepted publickey for core from 139.178.68.195 port 60564 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:57:01.588078 sshd-session[7166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:57:01.596020 systemd-logind[1503]: New session 98 of user core.
Mar 25 02:57:01.606118 systemd[1]: Started session-98.scope - Session 98 of User core.
Mar 25 02:57:02.304954 sshd[7168]: Connection closed by 139.178.68.195 port 60564
Mar 25 02:57:02.306185 sshd-session[7166]: pam_unix(sshd:session): session closed for user core
Mar 25 02:57:02.312227 systemd-logind[1503]: Session 98 logged out. Waiting for processes to exit.
Mar 25 02:57:02.313647 systemd[1]: sshd@108-10.230.58.198:22-139.178.68.195:60564.service: Deactivated successfully.
Mar 25 02:57:02.317287 systemd[1]: session-98.scope: Deactivated successfully.
Mar 25 02:57:02.318863 systemd-logind[1503]: Removed session 98.
Mar 25 02:57:03.015254 kubelet[2798]: E0325 02:57:03.015089 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:07.460248 systemd[1]: Started sshd@109-10.230.58.198:22-139.178.68.195:42414.service - OpenSSH per-connection server daemon (139.178.68.195:42414).
Mar 25 02:57:08.016242 kubelet[2798]: E0325 02:57:08.016146 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:08.371030 sshd[7179]: Accepted publickey for core from 139.178.68.195 port 42414 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:57:08.372867 sshd-session[7179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:57:08.381983 systemd-logind[1503]: New session 99 of user core.
Mar 25 02:57:08.387108 systemd[1]: Started session-99.scope - Session 99 of User core.
Mar 25 02:57:09.079279 sshd[7181]: Connection closed by 139.178.68.195 port 42414
Mar 25 02:57:09.078169 sshd-session[7179]: pam_unix(sshd:session): session closed for user core
Mar 25 02:57:09.084360 systemd[1]: sshd@109-10.230.58.198:22-139.178.68.195:42414.service: Deactivated successfully.
Mar 25 02:57:09.089091 systemd[1]: session-99.scope: Deactivated successfully.
Mar 25 02:57:09.090830 systemd-logind[1503]: Session 99 logged out. Waiting for processes to exit.
Mar 25 02:57:09.092483 systemd-logind[1503]: Removed session 99.
Mar 25 02:57:13.016914 kubelet[2798]: E0325 02:57:13.016654 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:14.238583 systemd[1]: Started sshd@110-10.230.58.198:22-139.178.68.195:42426.service - OpenSSH per-connection server daemon (139.178.68.195:42426).
Mar 25 02:57:15.170925 sshd[7193]: Accepted publickey for core from 139.178.68.195 port 42426 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:57:15.173170 sshd-session[7193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:57:15.180947 systemd-logind[1503]: New session 100 of user core.
Mar 25 02:57:15.190146 systemd[1]: Started session-100.scope - Session 100 of User core.
Mar 25 02:57:15.889532 sshd[7195]: Connection closed by 139.178.68.195 port 42426
Mar 25 02:57:15.888769 sshd-session[7193]: pam_unix(sshd:session): session closed for user core
Mar 25 02:57:15.895343 systemd[1]: sshd@110-10.230.58.198:22-139.178.68.195:42426.service: Deactivated successfully.
Mar 25 02:57:15.899191 systemd[1]: session-100.scope: Deactivated successfully.
Mar 25 02:57:15.901395 systemd-logind[1503]: Session 100 logged out. Waiting for processes to exit.
Mar 25 02:57:15.903776 systemd-logind[1503]: Removed session 100.
Mar 25 02:57:18.017201 kubelet[2798]: E0325 02:57:18.017108 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:21.046682 systemd[1]: Started sshd@111-10.230.58.198:22-139.178.68.195:40352.service - OpenSSH per-connection server daemon (139.178.68.195:40352).
Mar 25 02:57:21.957920 sshd[7209]: Accepted publickey for core from 139.178.68.195 port 40352 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:57:21.960506 sshd-session[7209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:57:21.969688 systemd-logind[1503]: New session 101 of user core.
Mar 25 02:57:21.979126 systemd[1]: Started session-101.scope - Session 101 of User core.
Mar 25 02:57:22.469445 containerd[1520]: time="2025-03-25T02:57:22.469361893Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"5988aa879ea47d6df27503ac296f2c271a80e53176f441c99132dc85961b2a8e\" pid:7225 exited_at:{seconds:1742871442 nanos:463092927}"
Mar 25 02:57:22.688672 sshd[7211]: Connection closed by 139.178.68.195 port 40352
Mar 25 02:57:22.691161 sshd-session[7209]: pam_unix(sshd:session): session closed for user core
Mar 25 02:57:22.697922 systemd[1]: sshd@111-10.230.58.198:22-139.178.68.195:40352.service: Deactivated successfully.
Mar 25 02:57:22.702515 systemd[1]: session-101.scope: Deactivated successfully.
Mar 25 02:57:22.704533 systemd-logind[1503]: Session 101 logged out. Waiting for processes to exit.
Mar 25 02:57:22.706518 systemd-logind[1503]: Removed session 101.
Mar 25 02:57:23.017347 kubelet[2798]: E0325 02:57:23.017241 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:25.048574 containerd[1520]: time="2025-03-25T02:57:25.048397459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"0c5051e1e1f1aeea9d66f1a0c28da0595ac249ea4da13f02ca3c5620e6b2d8ce\" pid:7257 exited_at:{seconds:1742871445 nanos:46981531}"
Mar 25 02:57:25.877795 kubelet[2798]: E0325 02:57:25.877612 2798 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:57:25.877795 kubelet[2798]: E0325 02:57:25.877713 2798 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:57:27.848804 systemd[1]: Started sshd@112-10.230.58.198:22-139.178.68.195:54100.service - OpenSSH per-connection server daemon (139.178.68.195:54100).
Mar 25 02:57:28.018349 kubelet[2798]: E0325 02:57:28.018267 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:28.749930 sshd[7272]: Accepted publickey for core from 139.178.68.195 port 54100 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:57:28.751700 sshd-session[7272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:57:28.761397 systemd-logind[1503]: New session 102 of user core.
Mar 25 02:57:28.765121 systemd[1]: Started session-102.scope - Session 102 of User core.
Mar 25 02:57:29.454732 sshd[7274]: Connection closed by 139.178.68.195 port 54100
Mar 25 02:57:29.455840 sshd-session[7272]: pam_unix(sshd:session): session closed for user core
Mar 25 02:57:29.462371 systemd[1]: sshd@112-10.230.58.198:22-139.178.68.195:54100.service: Deactivated successfully.
Mar 25 02:57:29.465720 systemd[1]: session-102.scope: Deactivated successfully.
Mar 25 02:57:29.467788 systemd-logind[1503]: Session 102 logged out. Waiting for processes to exit.
Mar 25 02:57:29.469340 systemd-logind[1503]: Removed session 102.
Mar 25 02:57:33.018722 kubelet[2798]: E0325 02:57:33.018649 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:34.627231 systemd[1]: Started sshd@113-10.230.58.198:22-139.178.68.195:54102.service - OpenSSH per-connection server daemon (139.178.68.195:54102).
Mar 25 02:57:35.547233 sshd[7285]: Accepted publickey for core from 139.178.68.195 port 54102 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:57:35.549787 sshd-session[7285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:57:35.559260 systemd-logind[1503]: New session 103 of user core.
Mar 25 02:57:35.569108 systemd[1]: Started session-103.scope - Session 103 of User core.
Mar 25 02:57:36.275929 sshd[7287]: Connection closed by 139.178.68.195 port 54102
Mar 25 02:57:36.276605 sshd-session[7285]: pam_unix(sshd:session): session closed for user core
Mar 25 02:57:36.282625 systemd[1]: sshd@113-10.230.58.198:22-139.178.68.195:54102.service: Deactivated successfully.
Mar 25 02:57:36.285731 systemd[1]: session-103.scope: Deactivated successfully.
Mar 25 02:57:36.288030 systemd-logind[1503]: Session 103 logged out. Waiting for processes to exit.
Mar 25 02:57:36.289631 systemd-logind[1503]: Removed session 103.
Mar 25 02:57:38.019649 kubelet[2798]: E0325 02:57:38.019574 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:41.439970 systemd[1]: Started sshd@114-10.230.58.198:22-139.178.68.195:45216.service - OpenSSH per-connection server daemon (139.178.68.195:45216).
Mar 25 02:57:42.346473 sshd[7299]: Accepted publickey for core from 139.178.68.195 port 45216 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:57:42.348999 sshd-session[7299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:57:42.357209 systemd-logind[1503]: New session 104 of user core.
Mar 25 02:57:42.364085 systemd[1]: Started session-104.scope - Session 104 of User core.
Mar 25 02:57:43.020386 kubelet[2798]: E0325 02:57:43.020164 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:43.074908 sshd[7301]: Connection closed by 139.178.68.195 port 45216
Mar 25 02:57:43.076826 sshd-session[7299]: pam_unix(sshd:session): session closed for user core
Mar 25 02:57:43.084736 systemd[1]: sshd@114-10.230.58.198:22-139.178.68.195:45216.service: Deactivated successfully.
Mar 25 02:57:43.092362 systemd[1]: session-104.scope: Deactivated successfully.
Mar 25 02:57:43.095209 systemd-logind[1503]: Session 104 logged out. Waiting for processes to exit.
Mar 25 02:57:43.099313 systemd-logind[1503]: Removed session 104.
Mar 25 02:57:45.345651 containerd[1520]: time="2025-03-25T02:57:45.345545224Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"c19f06412ba4a3d576d6bf89be763e43aba24a091af508a52d310da643819234\" pid:7324 exited_at:{seconds:1742871465 nanos:344604099}"
Mar 25 02:57:47.568655 systemd[1]: Started sshd@115-10.230.58.198:22-81.192.87.130:49944.service - OpenSSH per-connection server daemon (81.192.87.130:49944).
Mar 25 02:57:47.909402 sshd[7334]: Invalid user super from 81.192.87.130 port 49944
Mar 25 02:57:47.964013 sshd[7334]: Received disconnect from 81.192.87.130 port 49944:11: Bye Bye [preauth]
Mar 25 02:57:47.964013 sshd[7334]: Disconnected from invalid user super 81.192.87.130 port 49944 [preauth]
Mar 25 02:57:47.966641 systemd[1]: sshd@115-10.230.58.198:22-81.192.87.130:49944.service: Deactivated successfully.
Mar 25 02:57:48.020646 kubelet[2798]: E0325 02:57:48.020541 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:48.233072 systemd[1]: Started sshd@116-10.230.58.198:22-139.178.68.195:47334.service - OpenSSH per-connection server daemon (139.178.68.195:47334).
Mar 25 02:57:49.143545 sshd[7339]: Accepted publickey for core from 139.178.68.195 port 47334 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:57:49.145856 sshd-session[7339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:57:49.153926 systemd-logind[1503]: New session 105 of user core.
Mar 25 02:57:49.165133 systemd[1]: Started session-105.scope - Session 105 of User core.
Mar 25 02:57:49.856294 sshd[7349]: Connection closed by 139.178.68.195 port 47334
Mar 25 02:57:49.857537 sshd-session[7339]: pam_unix(sshd:session): session closed for user core
Mar 25 02:57:49.863801 systemd[1]: sshd@116-10.230.58.198:22-139.178.68.195:47334.service: Deactivated successfully.
Mar 25 02:57:49.867108 systemd[1]: session-105.scope: Deactivated successfully.
Mar 25 02:57:49.868566 systemd-logind[1503]: Session 105 logged out. Waiting for processes to exit.
Mar 25 02:57:49.870592 systemd-logind[1503]: Removed session 105.
Mar 25 02:57:52.480514 containerd[1520]: time="2025-03-25T02:57:52.480402275Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"efba640502d9a37cdf4ac39ca144e7dbb4091eb27ac0a8e32fde8242d1c7dbdb\" pid:7374 exited_at:{seconds:1742871472 nanos:479624776}"
Mar 25 02:57:53.021364 kubelet[2798]: E0325 02:57:53.021261 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:57:55.014882 systemd[1]: Started sshd@117-10.230.58.198:22-139.178.68.195:47338.service - OpenSSH per-connection server daemon (139.178.68.195:47338).
Mar 25 02:57:55.046009 containerd[1520]: time="2025-03-25T02:57:55.045721264Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"c61b44fac58d461c1799215b2f25cb266ac51d8d8aebba25be1625e703897b9f\" pid:7395 exited_at:{seconds:1742871475 nanos:45151725}"
Mar 25 02:57:55.934232 sshd[7406]: Accepted publickey for core from 139.178.68.195 port 47338 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:57:55.937453 sshd-session[7406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:57:55.949132 systemd-logind[1503]: New session 106 of user core.
Mar 25 02:57:55.960133 systemd[1]: Started session-106.scope - Session 106 of User core.
Mar 25 02:57:56.636564 sshd[7410]: Connection closed by 139.178.68.195 port 47338
Mar 25 02:57:56.637316 sshd-session[7406]: pam_unix(sshd:session): session closed for user core
Mar 25 02:57:56.641824 systemd-logind[1503]: Session 106 logged out. Waiting for processes to exit.
Mar 25 02:57:56.642451 systemd[1]: sshd@117-10.230.58.198:22-139.178.68.195:47338.service: Deactivated successfully.
Mar 25 02:57:56.646356 systemd[1]: session-106.scope: Deactivated successfully.
Mar 25 02:57:56.650357 systemd-logind[1503]: Removed session 106.
Mar 25 02:57:58.021586 kubelet[2798]: E0325 02:57:58.021480 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:01.790201 systemd[1]: Started sshd@118-10.230.58.198:22-139.178.68.195:60980.service - OpenSSH per-connection server daemon (139.178.68.195:60980).
Mar 25 02:58:02.699736 sshd[7424]: Accepted publickey for core from 139.178.68.195 port 60980 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:58:02.702251 sshd-session[7424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:58:02.711232 systemd-logind[1503]: New session 107 of user core.
Mar 25 02:58:02.719133 systemd[1]: Started session-107.scope - Session 107 of User core.
Mar 25 02:58:03.021943 kubelet[2798]: E0325 02:58:03.021694 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:03.412943 sshd[7426]: Connection closed by 139.178.68.195 port 60980
Mar 25 02:58:03.414204 sshd-session[7424]: pam_unix(sshd:session): session closed for user core
Mar 25 02:58:03.420137 systemd[1]: sshd@118-10.230.58.198:22-139.178.68.195:60980.service: Deactivated successfully.
Mar 25 02:58:03.423543 systemd[1]: session-107.scope: Deactivated successfully.
Mar 25 02:58:03.424864 systemd-logind[1503]: Session 107 logged out. Waiting for processes to exit.
Mar 25 02:58:03.426333 systemd-logind[1503]: Removed session 107.
Mar 25 02:58:08.023015 kubelet[2798]: E0325 02:58:08.022934 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:08.572088 systemd[1]: Started sshd@119-10.230.58.198:22-139.178.68.195:55786.service - OpenSSH per-connection server daemon (139.178.68.195:55786).
Mar 25 02:58:09.485782 sshd[7443]: Accepted publickey for core from 139.178.68.195 port 55786 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:58:09.488365 sshd-session[7443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:58:09.499002 systemd-logind[1503]: New session 108 of user core.
Mar 25 02:58:09.508224 systemd[1]: Started session-108.scope - Session 108 of User core.
Mar 25 02:58:10.279925 sshd[7445]: Connection closed by 139.178.68.195 port 55786
Mar 25 02:58:10.281120 sshd-session[7443]: pam_unix(sshd:session): session closed for user core
Mar 25 02:58:10.287006 systemd[1]: sshd@119-10.230.58.198:22-139.178.68.195:55786.service: Deactivated successfully.
Mar 25 02:58:10.289883 systemd[1]: session-108.scope: Deactivated successfully.
Mar 25 02:58:10.291428 systemd-logind[1503]: Session 108 logged out. Waiting for processes to exit.
Mar 25 02:58:10.292781 systemd-logind[1503]: Removed session 108.
Mar 25 02:58:13.023676 kubelet[2798]: E0325 02:58:13.023567 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:15.437443 systemd[1]: Started sshd@120-10.230.58.198:22-139.178.68.195:40828.service - OpenSSH per-connection server daemon (139.178.68.195:40828).
Mar 25 02:58:16.338837 sshd[7457]: Accepted publickey for core from 139.178.68.195 port 40828 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:58:16.341338 sshd-session[7457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:58:16.350676 systemd-logind[1503]: New session 109 of user core.
Mar 25 02:58:16.361185 systemd[1]: Started session-109.scope - Session 109 of User core.
Mar 25 02:58:17.093165 sshd[7459]: Connection closed by 139.178.68.195 port 40828
Mar 25 02:58:17.094260 sshd-session[7457]: pam_unix(sshd:session): session closed for user core
Mar 25 02:58:17.103601 systemd[1]: sshd@120-10.230.58.198:22-139.178.68.195:40828.service: Deactivated successfully.
Mar 25 02:58:17.109362 systemd[1]: session-109.scope: Deactivated successfully.
Mar 25 02:58:17.112541 systemd-logind[1503]: Session 109 logged out. Waiting for processes to exit.
Mar 25 02:58:17.114748 systemd-logind[1503]: Removed session 109.
Mar 25 02:58:18.024603 kubelet[2798]: E0325 02:58:18.024515 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:22.253691 systemd[1]: Started sshd@121-10.230.58.198:22-139.178.68.195:40836.service - OpenSSH per-connection server daemon (139.178.68.195:40836).
Mar 25 02:58:22.457907 containerd[1520]: time="2025-03-25T02:58:22.457801168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"05217c4435779c471fb9da319b39bada6dd73553dccc94ab302f956ec3ce5d79\" pid:7497 exited_at:{seconds:1742871502 nanos:457323729}"
Mar 25 02:58:23.025011 kubelet[2798]: E0325 02:58:23.024936 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:23.169921 sshd[7483]: Accepted publickey for core from 139.178.68.195 port 40836 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:58:23.171344 sshd-session[7483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:58:23.180665 systemd-logind[1503]: New session 110 of user core.
Mar 25 02:58:23.188133 systemd[1]: Started session-110.scope - Session 110 of User core.
Mar 25 02:58:23.884902 sshd[7506]: Connection closed by 139.178.68.195 port 40836
Mar 25 02:58:23.883561 sshd-session[7483]: pam_unix(sshd:session): session closed for user core
Mar 25 02:58:23.889525 systemd-logind[1503]: Session 110 logged out. Waiting for processes to exit.
Mar 25 02:58:23.889987 systemd[1]: sshd@121-10.230.58.198:22-139.178.68.195:40836.service: Deactivated successfully.
Mar 25 02:58:23.895757 systemd[1]: session-110.scope: Deactivated successfully.
Mar 25 02:58:23.900017 systemd-logind[1503]: Removed session 110.
Mar 25 02:58:25.054361 containerd[1520]: time="2025-03-25T02:58:25.054276216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"675b94104ff9173285aa29c38c9554e66712384cfd1fe31756a130e29c74292e\" pid:7528 exited_at:{seconds:1742871505 nanos:52514574}"
Mar 25 02:58:28.025168 kubelet[2798]: E0325 02:58:28.025095 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:29.039416 systemd[1]: Started sshd@122-10.230.58.198:22-139.178.68.195:49676.service - OpenSSH per-connection server daemon (139.178.68.195:49676).
Mar 25 02:58:29.968589 sshd[7543]: Accepted publickey for core from 139.178.68.195 port 49676 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:58:29.970986 sshd-session[7543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:58:29.978049 systemd-logind[1503]: New session 111 of user core.
Mar 25 02:58:29.986137 systemd[1]: Started session-111.scope - Session 111 of User core.
Mar 25 02:58:30.683390 sshd[7545]: Connection closed by 139.178.68.195 port 49676
Mar 25 02:58:30.684478 sshd-session[7543]: pam_unix(sshd:session): session closed for user core
Mar 25 02:58:30.690681 systemd[1]: sshd@122-10.230.58.198:22-139.178.68.195:49676.service: Deactivated successfully.
Mar 25 02:58:30.695513 systemd[1]: session-111.scope: Deactivated successfully.
Mar 25 02:58:30.697242 systemd-logind[1503]: Session 111 logged out. Waiting for processes to exit.
Mar 25 02:58:30.699398 systemd-logind[1503]: Removed session 111.
Mar 25 02:58:33.025635 kubelet[2798]: E0325 02:58:33.025537 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:35.837183 systemd[1]: Started sshd@123-10.230.58.198:22-139.178.68.195:42122.service - OpenSSH per-connection server daemon (139.178.68.195:42122).
Mar 25 02:58:36.739090 sshd[7557]: Accepted publickey for core from 139.178.68.195 port 42122 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:58:36.741806 sshd-session[7557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:58:36.751439 systemd-logind[1503]: New session 112 of user core.
Mar 25 02:58:36.758111 systemd[1]: Started session-112.scope - Session 112 of User core.
Mar 25 02:58:37.462644 sshd[7559]: Connection closed by 139.178.68.195 port 42122
Mar 25 02:58:37.461818 sshd-session[7557]: pam_unix(sshd:session): session closed for user core
Mar 25 02:58:37.469118 systemd[1]: sshd@123-10.230.58.198:22-139.178.68.195:42122.service: Deactivated successfully.
Mar 25 02:58:37.473803 systemd[1]: session-112.scope: Deactivated successfully.
Mar 25 02:58:37.475554 systemd-logind[1503]: Session 112 logged out. Waiting for processes to exit.
Mar 25 02:58:37.477545 systemd-logind[1503]: Removed session 112.
Mar 25 02:58:38.026495 kubelet[2798]: E0325 02:58:38.026370 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:42.624775 systemd[1]: Started sshd@124-10.230.58.198:22-139.178.68.195:42126.service - OpenSSH per-connection server daemon (139.178.68.195:42126).
Mar 25 02:58:43.027642 kubelet[2798]: E0325 02:58:43.027488 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:43.538248 sshd[7571]: Accepted publickey for core from 139.178.68.195 port 42126 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:58:43.540731 sshd-session[7571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:58:43.550995 systemd-logind[1503]: New session 113 of user core.
Mar 25 02:58:43.559343 systemd[1]: Started session-113.scope - Session 113 of User core.
Mar 25 02:58:44.261265 sshd[7573]: Connection closed by 139.178.68.195 port 42126
Mar 25 02:58:44.262418 sshd-session[7571]: pam_unix(sshd:session): session closed for user core
Mar 25 02:58:44.270015 systemd[1]: sshd@124-10.230.58.198:22-139.178.68.195:42126.service: Deactivated successfully.
Mar 25 02:58:44.273709 systemd[1]: session-113.scope: Deactivated successfully.
Mar 25 02:58:44.276493 systemd-logind[1503]: Session 113 logged out. Waiting for processes to exit.
Mar 25 02:58:44.279465 systemd-logind[1503]: Removed session 113.
Mar 25 02:58:45.347629 containerd[1520]: time="2025-03-25T02:58:45.347517423Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"10ef01f2b2a5dfb74d6bee9efddf41fc76101399eeda6f2875a016a37f801b02\" pid:7596 exited_at:{seconds:1742871525 nanos:347007244}"
Mar 25 02:58:47.650186 systemd[1]: Started sshd@125-10.230.58.198:22-81.192.87.130:61638.service - OpenSSH per-connection server daemon (81.192.87.130:61638).
Mar 25 02:58:47.996598 sshd[7606]: Invalid user t128 from 81.192.87.130 port 61638
Mar 25 02:58:48.028687 kubelet[2798]: E0325 02:58:48.028608 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:48.051065 sshd[7606]: Received disconnect from 81.192.87.130 port 61638:11: Bye Bye [preauth]
Mar 25 02:58:48.051065 sshd[7606]: Disconnected from invalid user t128 81.192.87.130 port 61638 [preauth]
Mar 25 02:58:48.053361 systemd[1]: sshd@125-10.230.58.198:22-81.192.87.130:61638.service: Deactivated successfully.
Mar 25 02:58:49.420376 systemd[1]: Started sshd@126-10.230.58.198:22-139.178.68.195:53636.service - OpenSSH per-connection server daemon (139.178.68.195:53636).
Mar 25 02:58:50.332094 sshd[7611]: Accepted publickey for core from 139.178.68.195 port 53636 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:58:50.333661 sshd-session[7611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:58:50.342147 systemd-logind[1503]: New session 114 of user core.
Mar 25 02:58:50.348100 systemd[1]: Started session-114.scope - Session 114 of User core.
Mar 25 02:58:51.039342 sshd[7613]: Connection closed by 139.178.68.195 port 53636
Mar 25 02:58:51.040698 sshd-session[7611]: pam_unix(sshd:session): session closed for user core
Mar 25 02:58:51.046778 systemd[1]: sshd@126-10.230.58.198:22-139.178.68.195:53636.service: Deactivated successfully.
Mar 25 02:58:51.052458 systemd[1]: session-114.scope: Deactivated successfully.
Mar 25 02:58:51.053937 systemd-logind[1503]: Session 114 logged out. Waiting for processes to exit.
Mar 25 02:58:51.055413 systemd-logind[1503]: Removed session 114.
Mar 25 02:58:52.465318 containerd[1520]: time="2025-03-25T02:58:52.465198178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"025fc4bcca954e59ecfb10737e39cd5bc79a4371653cce2ed6f8341fe33f6926\" pid:7637 exited_at:{seconds:1742871532 nanos:464416115}"
Mar 25 02:58:53.029645 kubelet[2798]: E0325 02:58:53.029564 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:58:55.068494 containerd[1520]: time="2025-03-25T02:58:55.068255385Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"3ed10fea1e9499ab6dca5bf461c4c59307f7bac9a6024ed0a89323e1c0fe844d\" pid:7659 exited_at:{seconds:1742871535 nanos:66815542}"
Mar 25 02:58:56.195850 systemd[1]: Started sshd@127-10.230.58.198:22-139.178.68.195:59938.service - OpenSSH per-connection server daemon (139.178.68.195:59938).
Mar 25 02:58:57.097046 sshd[7672]: Accepted publickey for core from 139.178.68.195 port 59938 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:58:57.100093 sshd-session[7672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:58:57.110189 systemd-logind[1503]: New session 115 of user core.
Mar 25 02:58:57.118182 systemd[1]: Started session-115.scope - Session 115 of User core.
Mar 25 02:58:57.800074 sshd[7674]: Connection closed by 139.178.68.195 port 59938
Mar 25 02:58:57.801115 sshd-session[7672]: pam_unix(sshd:session): session closed for user core
Mar 25 02:58:57.806439 systemd[1]: sshd@127-10.230.58.198:22-139.178.68.195:59938.service: Deactivated successfully.
Mar 25 02:58:57.809474 systemd[1]: session-115.scope: Deactivated successfully.
Mar 25 02:58:57.811126 systemd-logind[1503]: Session 115 logged out. Waiting for processes to exit.
Mar 25 02:58:57.813217 systemd-logind[1503]: Removed session 115.
Mar 25 02:58:58.030726 kubelet[2798]: E0325 02:58:58.030654 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:02.954579 systemd[1]: Started sshd@128-10.230.58.198:22-139.178.68.195:59948.service - OpenSSH per-connection server daemon (139.178.68.195:59948).
Mar 25 02:59:03.030988 kubelet[2798]: E0325 02:59:03.030863 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:03.852929 sshd[7688]: Accepted publickey for core from 139.178.68.195 port 59948 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:59:03.855017 sshd-session[7688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:59:03.864490 systemd-logind[1503]: New session 116 of user core.
Mar 25 02:59:03.872323 systemd[1]: Started session-116.scope - Session 116 of User core.
Mar 25 02:59:04.578043 sshd[7690]: Connection closed by 139.178.68.195 port 59948
Mar 25 02:59:04.579535 sshd-session[7688]: pam_unix(sshd:session): session closed for user core
Mar 25 02:59:04.585112 systemd[1]: sshd@128-10.230.58.198:22-139.178.68.195:59948.service: Deactivated successfully.
Mar 25 02:59:04.585899 systemd-logind[1503]: Session 116 logged out. Waiting for processes to exit.
Mar 25 02:59:04.591116 systemd[1]: session-116.scope: Deactivated successfully.
Mar 25 02:59:04.593507 systemd-logind[1503]: Removed session 116.
Mar 25 02:59:08.032140 kubelet[2798]: E0325 02:59:08.032061 2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:09.742017 systemd[1]: Started sshd@129-10.230.58.198:22-139.178.68.195:35394.service - OpenSSH per-connection server daemon (139.178.68.195:35394).
Mar 25 02:59:10.665414 sshd[7702]: Accepted publickey for core from 139.178.68.195 port 35394 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:59:10.667819 sshd-session[7702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:59:10.676688 systemd-logind[1503]: New session 117 of user core.
Mar 25 02:59:10.682220 systemd[1]: Started session-117.scope - Session 117 of User core.
Mar 25 02:59:11.270049 systemd[1]: Started sshd@130-10.230.58.198:22-95.90.242.212:10778.service - OpenSSH per-connection server daemon (95.90.242.212:10778).
Mar 25 02:59:11.371370 sshd[7704]: Connection closed by 139.178.68.195 port 35394
Mar 25 02:59:11.372388 sshd-session[7702]: pam_unix(sshd:session): session closed for user core
Mar 25 02:59:11.380036 systemd-logind[1503]: Session 117 logged out. Waiting for processes to exit.
Mar 25 02:59:11.381567 systemd[1]: sshd@129-10.230.58.198:22-139.178.68.195:35394.service: Deactivated successfully.
Mar 25 02:59:11.384785 systemd[1]: session-117.scope: Deactivated successfully.
Mar 25 02:59:11.386684 systemd-logind[1503]: Removed session 117.
Mar 25 02:59:11.540918 sshd[7713]: Invalid user dev from 95.90.242.212 port 10778
Mar 25 02:59:11.578913 sshd[7713]: Received disconnect from 95.90.242.212 port 10778:11: Bye Bye [preauth]
Mar 25 02:59:11.578913 sshd[7713]: Disconnected from invalid user dev 95.90.242.212 port 10778 [preauth]
Mar 25 02:59:11.581089 systemd[1]: sshd@130-10.230.58.198:22-95.90.242.212:10778.service: Deactivated successfully.
Mar 25 02:59:13.032333 kubelet[2798]: E0325 02:59:13.032262    2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:16.527257 systemd[1]: Started sshd@131-10.230.58.198:22-139.178.68.195:52808.service - OpenSSH per-connection server daemon (139.178.68.195:52808).
Mar 25 02:59:17.430412 sshd[7721]: Accepted publickey for core from 139.178.68.195 port 52808 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:59:17.432717 sshd-session[7721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:59:17.440832 systemd-logind[1503]: New session 118 of user core.
Mar 25 02:59:17.446159 systemd[1]: Started session-118.scope - Session 118 of User core.
Mar 25 02:59:18.032729 kubelet[2798]: E0325 02:59:18.032645    2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:18.128941 sshd[7723]: Connection closed by 139.178.68.195 port 52808
Mar 25 02:59:18.129941 sshd-session[7721]: pam_unix(sshd:session): session closed for user core
Mar 25 02:59:18.134501 systemd[1]: sshd@131-10.230.58.198:22-139.178.68.195:52808.service: Deactivated successfully.
Mar 25 02:59:18.138691 systemd[1]: session-118.scope: Deactivated successfully.
Mar 25 02:59:18.141848 systemd-logind[1503]: Session 118 logged out. Waiting for processes to exit.
Mar 25 02:59:18.144314 systemd-logind[1503]: Removed session 118.
Mar 25 02:59:18.664633 systemd[1]: Started sshd@132-10.230.58.198:22-95.90.242.212:50496.service - OpenSSH per-connection server daemon (95.90.242.212:50496).
Mar 25 02:59:18.938620 sshd[7734]: Invalid user admin from 95.90.242.212 port 50496
Mar 25 02:59:18.985695 sshd[7734]: Received disconnect from 95.90.242.212 port 50496:11: Bye Bye [preauth]
Mar 25 02:59:18.985695 sshd[7734]: Disconnected from invalid user admin 95.90.242.212 port 50496 [preauth]
Mar 25 02:59:18.988184 systemd[1]: sshd@132-10.230.58.198:22-95.90.242.212:50496.service: Deactivated successfully.
Mar 25 02:59:22.456259 containerd[1520]: time="2025-03-25T02:59:22.456052099Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"745eadbbfbe582d0d82c47503da7a6eb4db815f3780274c8f6972d036ce32782\" pid:7753 exited_at:{seconds:1742871562 nanos:455644737}"
Mar 25 02:59:23.033332 kubelet[2798]: E0325 02:59:23.033264    2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:23.288682 systemd[1]: Started sshd@133-10.230.58.198:22-139.178.68.195:52818.service - OpenSSH per-connection server daemon (139.178.68.195:52818).
Mar 25 02:59:24.197275 sshd[7763]: Accepted publickey for core from 139.178.68.195 port 52818 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:59:24.199570 sshd-session[7763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:59:24.209990 systemd-logind[1503]: New session 119 of user core.
Mar 25 02:59:24.221267 systemd[1]: Started session-119.scope - Session 119 of User core.
Mar 25 02:59:24.906911 sshd[7766]: Connection closed by 139.178.68.195 port 52818
Mar 25 02:59:24.905793 sshd-session[7763]: pam_unix(sshd:session): session closed for user core
Mar 25 02:59:24.910982 systemd[1]: sshd@133-10.230.58.198:22-139.178.68.195:52818.service: Deactivated successfully.
Mar 25 02:59:24.911389 systemd-logind[1503]: Session 119 logged out. Waiting for processes to exit.
Mar 25 02:59:24.914314 systemd[1]: session-119.scope: Deactivated successfully.
Mar 25 02:59:24.917327 systemd-logind[1503]: Removed session 119.
Mar 25 02:59:25.058021 containerd[1520]: time="2025-03-25T02:59:25.057941506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"3d03556482a41adab74b47f81a04904a3fdd34eb7072ef69de8f667cc5498f0d\" pid:7790 exited_at:{seconds:1742871565 nanos:57179861}"
Mar 25 02:59:28.034266 kubelet[2798]: E0325 02:59:28.034181    2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:30.062906 systemd[1]: Started sshd@134-10.230.58.198:22-139.178.68.195:43868.service - OpenSSH per-connection server daemon (139.178.68.195:43868).
Mar 25 02:59:30.878742 kubelet[2798]: E0325 02:59:30.878618    2798 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:59:30.878742 kubelet[2798]: E0325 02:59:30.878748    2798 kubelet.go:2886] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Mar 25 02:59:31.012168 sshd[7805]: Accepted publickey for core from 139.178.68.195 port 43868 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:59:31.014381 sshd-session[7805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:59:31.024657 systemd-logind[1503]: New session 120 of user core.
Mar 25 02:59:31.032094 systemd[1]: Started session-120.scope - Session 120 of User core.
Mar 25 02:59:31.772347 sshd[7808]: Connection closed by 139.178.68.195 port 43868
Mar 25 02:59:31.773431 sshd-session[7805]: pam_unix(sshd:session): session closed for user core
Mar 25 02:59:31.778573 systemd[1]: sshd@134-10.230.58.198:22-139.178.68.195:43868.service: Deactivated successfully.
Mar 25 02:59:31.781311 systemd[1]: session-120.scope: Deactivated successfully.
Mar 25 02:59:31.783946 systemd-logind[1503]: Session 120 logged out. Waiting for processes to exit.
Mar 25 02:59:31.785508 systemd-logind[1503]: Removed session 120.
Mar 25 02:59:33.034431 kubelet[2798]: E0325 02:59:33.034351    2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:36.929650 systemd[1]: Started sshd@135-10.230.58.198:22-139.178.68.195:48428.service - OpenSSH per-connection server daemon (139.178.68.195:48428).
Mar 25 02:59:37.844516 sshd[7820]: Accepted publickey for core from 139.178.68.195 port 48428 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:59:37.846055 sshd-session[7820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:59:37.856700 systemd-logind[1503]: New session 121 of user core.
Mar 25 02:59:37.866201 systemd[1]: Started session-121.scope - Session 121 of User core.
Mar 25 02:59:38.035482 kubelet[2798]: E0325 02:59:38.035383    2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:38.562295 sshd[7822]: Connection closed by 139.178.68.195 port 48428
Mar 25 02:59:38.563465 sshd-session[7820]: pam_unix(sshd:session): session closed for user core
Mar 25 02:59:38.568565 systemd[1]: sshd@135-10.230.58.198:22-139.178.68.195:48428.service: Deactivated successfully.
Mar 25 02:59:38.571789 systemd[1]: session-121.scope: Deactivated successfully.
Mar 25 02:59:38.574499 systemd-logind[1503]: Session 121 logged out. Waiting for processes to exit.
Mar 25 02:59:38.576701 systemd-logind[1503]: Removed session 121.
Mar 25 02:59:43.036414 kubelet[2798]: E0325 02:59:43.036321    2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:43.717725 systemd[1]: Started sshd@136-10.230.58.198:22-139.178.68.195:48440.service - OpenSSH per-connection server daemon (139.178.68.195:48440).
Mar 25 02:59:44.648140 sshd[7839]: Accepted publickey for core from 139.178.68.195 port 48440 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:59:44.650936 sshd-session[7839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:59:44.659355 systemd-logind[1503]: New session 122 of user core.
Mar 25 02:59:44.667418 systemd[1]: Started session-122.scope - Session 122 of User core.
Mar 25 02:59:45.344141 containerd[1520]: time="2025-03-25T02:59:45.344044086Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"cb2f7c472a9d711e96d370ccebadff44fc76b6fcfce1e95fe71c0bd8e3da4268\" pid:7862 exited_at:{seconds:1742871585 nanos:342575121}"
Mar 25 02:59:45.357496 sshd[7841]: Connection closed by 139.178.68.195 port 48440
Mar 25 02:59:45.361215 sshd-session[7839]: pam_unix(sshd:session): session closed for user core
Mar 25 02:59:45.366786 systemd[1]: sshd@136-10.230.58.198:22-139.178.68.195:48440.service: Deactivated successfully.
Mar 25 02:59:45.370121 systemd[1]: session-122.scope: Deactivated successfully.
Mar 25 02:59:45.372187 systemd-logind[1503]: Session 122 logged out. Waiting for processes to exit.
Mar 25 02:59:45.375684 systemd-logind[1503]: Removed session 122.
Mar 25 02:59:48.037046 kubelet[2798]: E0325 02:59:48.036957    2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:49.828982 systemd[1]: Started sshd@137-10.230.58.198:22-81.192.87.130:16745.service - OpenSSH per-connection server daemon (81.192.87.130:16745).
Mar 25 02:59:50.181775 sshd[7885]: Invalid user wayne from 81.192.87.130 port 16745
Mar 25 02:59:50.236086 sshd[7885]: Received disconnect from 81.192.87.130 port 16745:11: Bye Bye [preauth]
Mar 25 02:59:50.236086 sshd[7885]: Disconnected from invalid user wayne 81.192.87.130 port 16745 [preauth]
Mar 25 02:59:50.239424 systemd[1]: sshd@137-10.230.58.198:22-81.192.87.130:16745.service: Deactivated successfully.
Mar 25 02:59:50.513025 systemd[1]: Started sshd@138-10.230.58.198:22-139.178.68.195:46898.service - OpenSSH per-connection server daemon (139.178.68.195:46898).
Mar 25 02:59:51.422785 sshd[7891]: Accepted publickey for core from 139.178.68.195 port 46898 ssh2: RSA SHA256:8xIt8IySDuMSXZEyyIJ8G/Y59zelbqrs6jDZ/A/UBT0
Mar 25 02:59:51.425144 sshd-session[7891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:59:51.433242 systemd-logind[1503]: New session 123 of user core.
Mar 25 02:59:51.442149 systemd[1]: Started session-123.scope - Session 123 of User core.
Mar 25 02:59:52.129091 sshd[7893]: Connection closed by 139.178.68.195 port 46898
Mar 25 02:59:52.128915 sshd-session[7891]: pam_unix(sshd:session): session closed for user core
Mar 25 02:59:52.135082 systemd[1]: sshd@138-10.230.58.198:22-139.178.68.195:46898.service: Deactivated successfully.
Mar 25 02:59:52.137972 systemd[1]: session-123.scope: Deactivated successfully.
Mar 25 02:59:52.141940 systemd-logind[1503]: Session 123 logged out. Waiting for processes to exit.
Mar 25 02:59:52.143432 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Mar 25 02:59:52.145264 systemd-logind[1503]: Removed session 123.
Mar 25 02:59:52.198576 systemd-tmpfiles[7903]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 25 02:59:52.199952 systemd-tmpfiles[7903]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 25 02:59:52.202159 systemd-tmpfiles[7903]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 25 02:59:52.202704 systemd-tmpfiles[7903]: ACLs are not supported, ignoring.
Mar 25 02:59:52.202835 systemd-tmpfiles[7903]: ACLs are not supported, ignoring.
Mar 25 02:59:52.210906 systemd-tmpfiles[7903]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 02:59:52.210930 systemd-tmpfiles[7903]: Skipping /boot
Mar 25 02:59:52.226125 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Mar 25 02:59:52.226710 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Mar 25 02:59:52.234853 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Mar 25 02:59:52.467085 containerd[1520]: time="2025-03-25T02:59:52.466846349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c89faa174b78b640d649de5ec024d3354c80b621a11241d36aa050b4cc6f9b43\" id:\"607c1e010927177d703101e63dfae582f8e7785ed78df33e345c5ced94dcc5d1\" pid:7920 exited_at:{seconds:1742871592 nanos:466239124}"
Mar 25 02:59:53.037577 kubelet[2798]: E0325 02:59:53.037484    2798 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down"
Mar 25 02:59:55.054107 containerd[1520]: time="2025-03-25T02:59:55.054030327Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08b46a662cdc025a4603a6fcdf6a5e348afd7fcf81bec452154928151ac37f47\" id:\"e3a2cb22ee8d36c8dea939e976a0b27aff6c9e484f92c7978d6c71b049af41c1\" pid:7942 exited_at:{seconds:1742871595 nanos:53410339}"