May 14 01:18:35.065667 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025 May 14 01:18:35.065688 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 14 01:18:35.065698 kernel: BIOS-provided physical RAM map: May 14 01:18:35.065704 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 14 01:18:35.065710 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 14 01:18:35.065716 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 14 01:18:35.065723 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable May 14 01:18:35.065729 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved May 14 01:18:35.065736 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 14 01:18:35.065742 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 14 01:18:35.065748 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 14 01:18:35.065754 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 14 01:18:35.065760 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 14 01:18:35.065766 kernel: NX (Execute Disable) protection: active May 14 01:18:35.065774 kernel: APIC: Static calls initialized May 14 01:18:35.065782 kernel: SMBIOS 3.0.0 present. 
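The command line echoed above carries all of the Flatcar and dm-verity settings in one string. As a small aid for picking individual settings out of a capture like this, here is a minimal Python sketch; the sample string is shortened from the log above, and `parse_cmdline` is an illustrative helper name, not part of any kernel or Flatcar tooling.

```python
# Minimal sketch: split a kernel command line (as logged above) into
# bare flags and key=value parameters.
import shlex

def parse_cmdline(cmdline: str):
    flags, params = [], {}
    for token in shlex.split(cmdline):
        if "=" in token:
            key, _, value = token.partition("=")
            params[key] = value
        else:
            flags.append(token)
    return flags, params

sample = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
          "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
          "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
          "console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected "
          "flatcar.oem.id=hetzner")

flags, params = parse_cmdline(sample)
print(params["root"])        # LABEL=ROOT
print(params["verity.usr"])  # PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132
```

Note that a repeated key such as console= keeps only the last value in this toy dict; the kernel itself honours both, as the later "console [tty0] enabled" and "console [ttyS0] enabled" lines show.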
May 14 01:18:35.065789 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 May 14 01:18:35.065795 kernel: Hypervisor detected: KVM May 14 01:18:35.065802 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 14 01:18:35.065808 kernel: kvm-clock: using sched offset of 3261177651 cycles May 14 01:18:35.065815 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 14 01:18:35.065822 kernel: tsc: Detected 2495.312 MHz processor May 14 01:18:35.065829 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 14 01:18:35.065836 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 14 01:18:35.065844 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 May 14 01:18:35.065851 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 14 01:18:35.065858 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 14 01:18:35.065865 kernel: Using GB pages for direct mapping May 14 01:18:35.065872 kernel: ACPI: Early table checksum verification disabled May 14 01:18:35.065878 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) May 14 01:18:35.065885 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:18:35.065892 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:18:35.065899 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:18:35.065906 kernel: ACPI: FACS 0x000000007CFE0000 000040 May 14 01:18:35.065913 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:18:35.065920 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:18:35.065927 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:18:35.065933 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 01:18:35.065940 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] May 14 01:18:35.065947 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] May 14 01:18:35.065956 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] May 14 01:18:35.065964 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] May 14 01:18:35.065971 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] May 14 01:18:35.065978 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] May 14 01:18:35.065984 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] May 14 01:18:35.065991 kernel: No NUMA configuration found May 14 01:18:35.065998 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] May 14 01:18:35.066006 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] May 14 01:18:35.066014 kernel: Zone ranges: May 14 01:18:35.066020 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 14 01:18:35.066027 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] May 14 01:18:35.066034 kernel: Normal empty May 14 01:18:35.066041 kernel: Movable zone start for each node May 14 01:18:35.066048 kernel: Early memory node ranges May 14 01:18:35.066055 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 14 01:18:35.066062 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] May 14 01:18:35.066069 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] May 14 01:18:35.066077 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 14 01:18:35.066084 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 14 01:18:35.066091 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 14 01:18:35.066098 kernel: ACPI: PM-Timer IO Port: 0x608 May 14 01:18:35.066105 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 14 01:18:35.066112 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 14 01:18:35.066119 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 14 01:18:35.066126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 14 01:18:35.066133 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 14 01:18:35.066141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 14 01:18:35.066148 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 14 01:18:35.066155 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 14 01:18:35.066162 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 14 01:18:35.066169 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 14 01:18:35.066176 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 14 01:18:35.066183 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 14 01:18:35.066190 kernel: Booting paravirtualized kernel on KVM May 14 01:18:35.066197 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 14 01:18:35.066206 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 14 01:18:35.066213 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 14 01:18:35.066219 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 14 01:18:35.066226 kernel: pcpu-alloc: [0] 0 1 May 14 01:18:35.066233 kernel: kvm-guest: PV spinlocks disabled, no host support May 14 01:18:35.066241 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 14 01:18:35.066249 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 01:18:35.066256 kernel: random: crng init done May 14 01:18:35.066264 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 01:18:35.066271 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 14 01:18:35.066278 kernel: Fallback order for Node 0: 0 May 14 01:18:35.066285 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 May 14 01:18:35.066292 kernel: Policy zone: DMA32 May 14 01:18:35.066299 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 01:18:35.066306 kernel: Memory: 1917956K/2047464K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 129248K reserved, 0K cma-reserved) May 14 01:18:35.066313 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 14 01:18:35.066320 kernel: ftrace: allocating 37993 entries in 149 pages May 14 01:18:35.066329 kernel: ftrace: allocated 149 pages with 4 groups May 14 01:18:35.066336 kernel: Dynamic Preempt: voluntary May 14 01:18:35.066342 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 01:18:35.066350 kernel: rcu: RCU event tracing is enabled. May 14 01:18:35.066357 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 14 01:18:35.066364 kernel: Trampoline variant of Tasks RCU enabled. May 14 01:18:35.066371 kernel: Rude variant of Tasks RCU enabled. May 14 01:18:35.066378 kernel: Tracing variant of Tasks RCU enabled. May 14 01:18:35.066385 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 14 01:18:35.066393 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 14 01:18:35.066400 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 14 01:18:35.066407 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 14 01:18:35.066414 kernel: Console: colour VGA+ 80x25 May 14 01:18:35.066421 kernel: printk: console [tty0] enabled May 14 01:18:35.066428 kernel: printk: console [ttyS0] enabled May 14 01:18:35.066435 kernel: ACPI: Core revision 20230628 May 14 01:18:35.066442 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 14 01:18:35.066459 kernel: APIC: Switch to symmetric I/O mode setup May 14 01:18:35.066468 kernel: x2apic enabled May 14 01:18:35.066475 kernel: APIC: Switched APIC routing to: physical x2apic May 14 01:18:35.066482 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 14 01:18:35.066489 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 14 01:18:35.066496 kernel: Calibrating delay loop (skipped) preset value.. 
4990.62 BogoMIPS (lpj=2495312) May 14 01:18:35.066503 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 14 01:18:35.066510 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 14 01:18:35.066517 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 14 01:18:35.066530 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 14 01:18:35.066537 kernel: Spectre V2 : Mitigation: Retpolines May 14 01:18:35.066544 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 14 01:18:35.066551 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 14 01:18:35.066560 kernel: RETBleed: Mitigation: untrained return thunk May 14 01:18:35.066568 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 14 01:18:35.066575 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 14 01:18:35.066582 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 14 01:18:35.066590 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 14 01:18:35.066598 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 14 01:18:35.066606 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 14 01:18:35.066613 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. May 14 01:18:35.066620 kernel: Freeing SMP alternatives memory: 32K May 14 01:18:35.066627 kernel: pid_max: default: 32768 minimum: 301 May 14 01:18:35.066816 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 14 01:18:35.067112 kernel: landlock: Up and running. May 14 01:18:35.067129 kernel: SELinux: Initializing. May 14 01:18:35.067152 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 14 01:18:35.067187 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 14 01:18:35.067205 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) May 14 01:18:35.067228 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 01:18:35.067236 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 01:18:35.067244 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 14 01:18:35.067251 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 14 01:18:35.067259 kernel: ... version: 0 May 14 01:18:35.067266 kernel: ... bit width: 48 May 14 01:18:35.067275 kernel: ... generic registers: 6 May 14 01:18:35.067283 kernel: ... value mask: 0000ffffffffffff May 14 01:18:35.067290 kernel: ... max period: 00007fffffffffff May 14 01:18:35.067297 kernel: ... fixed-purpose events: 0 May 14 01:18:35.067305 kernel: ... event mask: 000000000000003f May 14 01:18:35.067312 kernel: signal: max sigframe size: 1776 May 14 01:18:35.067319 kernel: rcu: Hierarchical SRCU implementation. May 14 01:18:35.067327 kernel: rcu: Max phase no-delay instances is 400. May 14 01:18:35.067334 kernel: smp: Bringing up secondary CPUs ... May 14 01:18:35.067341 kernel: smpboot: x86: Booting SMP configuration: May 14 01:18:35.067350 kernel: .... 
node #0, CPUs: #1 May 14 01:18:35.067357 kernel: smp: Brought up 1 node, 2 CPUs May 14 01:18:35.067364 kernel: smpboot: Max logical packages: 1 May 14 01:18:35.067372 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS) May 14 01:18:35.067379 kernel: devtmpfs: initialized May 14 01:18:35.067386 kernel: x86/mm: Memory block size: 128MB May 14 01:18:35.067394 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 01:18:35.067401 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 14 01:18:35.067409 kernel: pinctrl core: initialized pinctrl subsystem May 14 01:18:35.067417 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 01:18:35.067425 kernel: audit: initializing netlink subsys (disabled) May 14 01:18:35.067432 kernel: audit: type=2000 audit(1747185513.607:1): state=initialized audit_enabled=0 res=1 May 14 01:18:35.067440 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 01:18:35.067457 kernel: thermal_sys: Registered thermal governor 'user_space' May 14 01:18:35.067465 kernel: cpuidle: using governor menu May 14 01:18:35.067472 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 01:18:35.067479 kernel: dca service started, version 1.12.1 May 14 01:18:35.067487 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 14 01:18:35.067496 kernel: PCI: Using configuration type 1 for base access May 14 01:18:35.067504 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 14 01:18:35.067511 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 14 01:18:35.067518 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 14 01:18:35.067526 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 14 01:18:35.067533 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 14 01:18:35.067540 kernel: ACPI: Added _OSI(Module Device) May 14 01:18:35.067548 kernel: ACPI: Added _OSI(Processor Device) May 14 01:18:35.067555 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 01:18:35.067564 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 01:18:35.067571 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 01:18:35.067578 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 14 01:18:35.067585 kernel: ACPI: Interpreter enabled May 14 01:18:35.067593 kernel: ACPI: PM: (supports S0 S5) May 14 01:18:35.067600 kernel: ACPI: Using IOAPIC for interrupt routing May 14 01:18:35.067607 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 14 01:18:35.067615 kernel: PCI: Using E820 reservations for host bridge windows May 14 01:18:35.067622 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 14 01:18:35.067631 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 01:18:35.067798 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 01:18:35.067878 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 14 01:18:35.067950 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 14 01:18:35.067959 kernel: PCI host bridge to bus 0000:00 May 14 01:18:35.068037 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 14 01:18:35.068105 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 14 
01:18:35.068170 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 14 01:18:35.068235 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] May 14 01:18:35.068302 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 14 01:18:35.068366 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 14 01:18:35.068431 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 01:18:35.068530 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 14 01:18:35.068620 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 May 14 01:18:35.069157 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] May 14 01:18:35.069235 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] May 14 01:18:35.069308 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] May 14 01:18:35.069380 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] May 14 01:18:35.069466 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 14 01:18:35.069550 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 May 14 01:18:35.069628 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] May 14 01:18:35.069739 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 May 14 01:18:35.069814 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] May 14 01:18:35.069895 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 May 14 01:18:35.069970 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] May 14 01:18:35.070052 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 May 14 01:18:35.070134 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] May 14 01:18:35.070218 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 May 14 01:18:35.070293 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] May 14 01:18:35.070373 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 May 14 01:18:35.070446 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] May 14 01:18:35.070538 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 May 14 01:18:35.070618 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] May 14 01:18:35.072150 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 May 14 01:18:35.072230 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] May 14 01:18:35.072316 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 May 14 01:18:35.072391 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] May 14 01:18:35.072480 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 14 01:18:35.072558 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 14 01:18:35.072702 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 14 01:18:35.072782 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] May 14 01:18:35.072853 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] May 14 01:18:35.072929 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 14 01:18:35.073001 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 14 01:18:35.073100 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 May 14 01:18:35.073206 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] May 14 01:18:35.073308 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] May 14 
01:18:35.073386 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] May 14 01:18:35.073472 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 14 01:18:35.073545 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] May 14 01:18:35.073618 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] May 14 01:18:35.076993 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 May 14 01:18:35.077080 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] May 14 01:18:35.077154 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 14 01:18:35.077226 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] May 14 01:18:35.077297 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 14 01:18:35.077377 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 May 14 01:18:35.077464 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] May 14 01:18:35.077543 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] May 14 01:18:35.077614 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 14 01:18:35.077700 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] May 14 01:18:35.077772 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 14 01:18:35.077858 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 May 14 01:18:35.077933 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] May 14 01:18:35.078005 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 14 01:18:35.078080 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] May 14 01:18:35.078151 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 14 01:18:35.078233 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 May 14 01:18:35.078308 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] May 14 01:18:35.078428 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] May 14 01:18:35.078534 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 14 01:18:35.078608 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] May 14 01:18:35.078704 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 14 01:18:35.078791 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 May 14 01:18:35.078868 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] May 14 01:18:35.078942 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] May 14 01:18:35.079014 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 14 01:18:35.079085 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] May 14 01:18:35.079158 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 14 01:18:35.079167 kernel: acpiphp: Slot [0] registered May 14 01:18:35.080786 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 May 14 01:18:35.080898 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] May 14 01:18:35.080977 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] May 14 01:18:35.081054 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] May 14 01:18:35.081128 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 14 01:18:35.081203 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] May 14 01:18:35.081276 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 14 01:18:35.081286 
kernel: acpiphp: Slot [0-2] registered May 14 01:18:35.081362 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 14 01:18:35.081435 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] May 14 01:18:35.081521 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 14 01:18:35.081532 kernel: acpiphp: Slot [0-3] registered May 14 01:18:35.081603 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 14 01:18:35.081758 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] May 14 01:18:35.081833 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 14 01:18:35.081842 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 14 01:18:35.081853 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 14 01:18:35.081861 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 14 01:18:35.081868 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 14 01:18:35.081876 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 14 01:18:35.081883 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 14 01:18:35.081890 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 14 01:18:35.081898 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 14 01:18:35.081905 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 14 01:18:35.081912 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 14 01:18:35.081921 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 14 01:18:35.081929 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 14 01:18:35.081936 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 14 01:18:35.081943 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 14 01:18:35.081950 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 14 01:18:35.081958 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 14 01:18:35.081965 kernel: iommu: Default domain type: Translated May 14 01:18:35.081973 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 14 01:18:35.081980 kernel: PCI: Using ACPI for IRQ routing May 14 01:18:35.081989 kernel: PCI: pci_cache_line_size set to 64 bytes May 14 01:18:35.081996 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 14 01:18:35.082004 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] May 14 01:18:35.082076 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 14 01:18:35.082147 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 14 01:18:35.082218 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 14 01:18:35.082228 kernel: vgaarb: loaded May 14 01:18:35.082235 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 14 01:18:35.082243 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 14 01:18:35.082253 kernel: clocksource: Switched to clocksource kvm-clock May 14 01:18:35.082260 kernel: VFS: Disk quotas dquot_6.6.0 May 14 01:18:35.082268 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 01:18:35.082275 kernel: pnp: PnP ACPI init May 14 01:18:35.082352 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 14 01:18:35.082363 kernel: pnp: PnP ACPI: found 5 devices May 14 01:18:35.082371 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 14 01:18:35.082378 kernel: NET: Registered PF_INET protocol family May 14 
01:18:35.082388 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 01:18:35.082395 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 14 01:18:35.082403 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 01:18:35.082411 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 14 01:18:35.082418 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 14 01:18:35.082425 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 14 01:18:35.082433 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 14 01:18:35.082440 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 14 01:18:35.082459 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 01:18:35.082468 kernel: NET: Registered PF_XDP protocol family May 14 01:18:35.082543 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 14 01:18:35.082616 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 14 01:18:35.083772 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 May 14 01:18:35.083850 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] May 14 01:18:35.083922 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] May 14 01:18:35.083993 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] May 14 01:18:35.084070 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] May 14 01:18:35.084141 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] May 14 01:18:35.084232 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] May 14 01:18:35.084313 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] May 14 01:18:35.084429 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] May 14 01:18:35.084818 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 14 01:18:35.084900 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] May 14 01:18:35.084976 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] May 14 01:18:35.085055 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 14 01:18:35.085130 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] May 14 01:18:35.085204 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] May 14 01:18:35.085278 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 14 01:18:35.085352 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] May 14 01:18:35.085427 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] May 14 01:18:35.085517 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 14 01:18:35.085603 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] May 14 01:18:35.085834 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] May 14 01:18:35.085912 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 14 01:18:35.085984 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] May 14 01:18:35.086055 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] May 14 01:18:35.086128 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] May 14 01:18:35.086199 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 14 01:18:35.086271 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] May 14 01:18:35.086342 
kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] May 14 01:18:35.086414 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] May 14 01:18:35.086532 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 14 01:18:35.086604 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] May 14 01:18:35.086714 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] May 14 01:18:35.086787 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] May 14 01:18:35.086865 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 14 01:18:35.086934 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 14 01:18:35.086998 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 14 01:18:35.087065 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 14 01:18:35.087129 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] May 14 01:18:35.087192 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 14 01:18:35.087257 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 14 01:18:35.087334 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] May 14 01:18:35.087402 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] May 14 01:18:35.087487 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] May 14 01:18:35.087556 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] May 14 01:18:35.087696 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] May 14 01:18:35.087804 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] May 14 01:18:35.087886 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] May 14 01:18:35.087953 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] May 14 01:18:35.088025 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] May 14 01:18:35.088092 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] May 14 01:18:35.088162 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] May 14 01:18:35.088229 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] May 14 01:18:35.088323 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] May 14 01:18:35.088397 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] May 14 01:18:35.088474 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] May 14 01:18:35.088547 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] May 14 01:18:35.088613 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] May 14 01:18:35.088714 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] May 14 01:18:35.088791 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] May 14 01:18:35.088858 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] May 14 01:18:35.088923 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] May 14 01:18:35.088934 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 14 01:18:35.088942 kernel: PCI: CLS 0 bytes, default 64 May 14 01:18:35.088950 kernel: Initialise system trusted keyrings May 14 01:18:35.088958 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 14 01:18:35.088965 kernel: Key type asymmetric registered May 14 01:18:35.088975 kernel: Asymmetric key parser 'x509' registered May 14 01:18:35.088983 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded 
(major 251) May 14 01:18:35.088991 kernel: io scheduler mq-deadline registered May 14 01:18:35.088999 kernel: io scheduler kyber registered May 14 01:18:35.089006 kernel: io scheduler bfq registered May 14 01:18:35.089080 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 May 14 01:18:35.089153 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 14 01:18:35.089226 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 14 01:18:35.089298 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 14 01:18:35.089372 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 14 01:18:35.089444 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 14 01:18:35.089529 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 14 01:18:35.089601 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 14 01:18:35.089713 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 14 01:18:35.089787 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 14 01:18:35.089858 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 14 01:18:35.089930 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 14 01:18:35.090006 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 14 01:18:35.090078 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 14 01:18:35.090150 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 14 01:18:35.090229 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 14 01:18:35.090240 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 14 01:18:35.090310 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 May 14 01:18:35.090382 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 May 14 01:18:35.090392 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 14 01:18:35.090400 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 May 14 01:18:35.090410 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 01:18:35.090418 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 14 01:18:35.090426 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 14 01:18:35.090434 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 14 01:18:35.090442 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 14 01:18:35.090461 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 14 01:18:35.090537 kernel: rtc_cmos 00:03: RTC can wake from S4 May 14 01:18:35.090605 kernel: rtc_cmos 00:03: registered as rtc0 May 14 01:18:35.090707 kernel: rtc_cmos 00:03: setting system clock to 2025-05-14T01:18:34 UTC (1747185514) May 14 01:18:35.090776 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 14 01:18:35.090786 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 14 01:18:35.090794 kernel: NET: Registered PF_INET6 protocol family May 14 01:18:35.090802 kernel: Segment Routing with IPv6 May 14 01:18:35.090810 kernel: In-situ OAM (IOAM) with IPv6 May 14 01:18:35.090818 kernel: NET: Registered PF_PACKET protocol family May 14 01:18:35.090826 kernel: Key type dns_resolver registered May 14 01:18:35.090836 kernel: IPI shorthand broadcast: enabled May 14 01:18:35.090844 kernel: sched_clock: Marking stable (1211010573, 146557260)->(1368011949, -10444116) May 14 01:18:35.090852 kernel: registered taskstats version 1 May 14 01:18:35.090860 kernel: Loading compiled-in X.509 certificates May 14 01:18:35.090868 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module 
signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94' May 14 01:18:35.090876 kernel: Key type .fscrypt registered May 14 01:18:35.090883 kernel: Key type fscrypt-provisioning registered May 14 01:18:35.090891 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 01:18:35.090899 kernel: ima: Allocated hash algorithm: sha1 May 14 01:18:35.090908 kernel: ima: No architecture policies found May 14 01:18:35.090916 kernel: clk: Disabling unused clocks May 14 01:18:35.090924 kernel: Freeing unused kernel image (initmem) memory: 43604K May 14 01:18:35.090932 kernel: Write protecting the kernel read-only data: 40960k May 14 01:18:35.090939 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 14 01:18:35.090947 kernel: Run /init as init process May 14 01:18:35.090955 kernel: with arguments: May 14 01:18:35.090962 kernel: /init May 14 01:18:35.090970 kernel: with environment: May 14 01:18:35.090979 kernel: HOME=/ May 14 01:18:35.090987 kernel: TERM=linux May 14 01:18:35.090994 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 01:18:35.091003 systemd[1]: Successfully made /usr/ read-only. May 14 01:18:35.091015 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 01:18:35.091024 systemd[1]: Detected virtualization kvm. May 14 01:18:35.091032 systemd[1]: Detected architecture x86-64. May 14 01:18:35.091040 systemd[1]: Running in initrd. May 14 01:18:35.091050 systemd[1]: No hostname configured, using default hostname. May 14 01:18:35.091058 systemd[1]: Hostname set to . May 14 01:18:35.091067 systemd[1]: Initializing machine ID from VM UUID. May 14 01:18:35.091075 systemd[1]: Queued start job for default target initrd.target. May 14 01:18:35.091084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 01:18:35.091092 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 01:18:35.091101 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 14 01:18:35.091109 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 01:18:35.091120 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 14 01:18:35.091129 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 14 01:18:35.091138 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 14 01:18:35.091147 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 14 01:18:35.091155 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 01:18:35.091164 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 01:18:35.091172 systemd[1]: Reached target paths.target - Path Units. May 14 01:18:35.091181 systemd[1]: Reached target slices.target - Slice Units. May 14 01:18:35.091190 systemd[1]: Reached target swap.target - Swaps. May 14 01:18:35.091198 systemd[1]: Reached target timers.target - Timer Units. 
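The initrd device units above show systemd's path escaping: /dev/disk/by-label/EFI-SYSTEM becomes dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Below is a rough Python approximation of that rule, written only to make the logged names readable; systemd-escape --path is the real tool, and this sketch ignores corner cases such as leading dots and non-ASCII bytes.

```python
# Rough approximation of systemd's path-to-unit-name escaping, to explain
# names like dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device in the log above.
# Not the authoritative implementation.
import string

_SAFE = set(string.ascii_letters + string.digits + ":_.")

def escape_path(path: str, suffix: str = ".device") -> str:
    trimmed = path.strip("/")
    out = []
    for ch in trimmed:
        if ch == "/":
            out.append("-")                   # path separators become dashes
        elif ch in _SAFE:
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))   # everything else, incl. '-', is hex-escaped
    return "".join(out) + suffix

print(escape_path("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
print(escape_path("/dev/mapper/usr"))
# dev-mapper-usr.device
```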
May 14 01:18:35.091206 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 14 01:18:35.091214 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 01:18:35.091223 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 14 01:18:35.091231 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 14 01:18:35.091239 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 01:18:35.091249 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 01:18:35.091257 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 01:18:35.091265 systemd[1]: Reached target sockets.target - Socket Units. May 14 01:18:35.091274 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 14 01:18:35.091282 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 01:18:35.091290 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 14 01:18:35.091298 systemd[1]: Starting systemd-fsck-usr.service... May 14 01:18:35.091306 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 01:18:35.091315 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 01:18:35.091325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:18:35.091333 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 14 01:18:35.091360 systemd-journald[187]: Collecting audit messages is disabled. May 14 01:18:35.091381 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 01:18:35.091392 systemd[1]: Finished systemd-fsck-usr.service. May 14 01:18:35.091400 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 14 01:18:35.091409 systemd-journald[187]: Journal started May 14 01:18:35.091429 systemd-journald[187]: Runtime Journal (/run/log/journal/9966c5e1062b4f28a225eaea89f367b6) is 4.7M, max 38.3M, 33.5M free. May 14 01:18:35.081698 systemd-modules-load[189]: Inserted module 'overlay' May 14 01:18:35.131709 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 01:18:35.131733 kernel: Bridge firewalling registered May 14 01:18:35.131743 systemd[1]: Started systemd-journald.service - Journal Service. May 14 01:18:35.112140 systemd-modules-load[189]: Inserted module 'br_netfilter' May 14 01:18:35.142910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 01:18:35.143503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:18:35.147785 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 01:18:35.150746 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 14 01:18:35.152716 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 01:18:35.156950 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 14 01:18:35.162748 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 01:18:35.167208 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
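The journal prefixes in this capture use the syslog-style "May 14 01:18:35.091429" form without a year. When reading a dump like this, a small helper can turn two prefixes into an elapsed time; the year 2025 is assumed here from the build date in the first kernel line, since the prefix itself does not carry it.

```python
# Hedged helper for a capture like this one: parse the "May 14 01:18:35.091429"
# prefixes and report the elapsed time between two entries.
from datetime import datetime

FMT = "%Y %b %d %H:%M:%S.%f"

def parse_stamp(prefix: str, year: int = 2025) -> datetime:
    return datetime.strptime(f"{year} {prefix}", FMT)

first = parse_stamp("May 14 01:18:35.065667")   # first kernel line above
fetch = parse_stamp("May 14 01:18:38.418315")   # Ignition "GET result: OK" later on
print((fetch - first).total_seconds())          # ~3.35 s from early boot to metadata fetch
```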
May 14 01:18:35.169561 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 01:18:35.176733 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 01:18:35.179974 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:18:35.182763 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 14 01:18:35.183977 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 01:18:35.194898 dracut-cmdline[222]: dracut-dracut-053 May 14 01:18:35.197097 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 14 01:18:35.212146 systemd-resolved[219]: Positive Trust Anchors: May 14 01:18:35.212158 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 01:18:35.212188 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 01:18:35.220846 systemd-resolved[219]: Defaulting to hostname 'linux'. May 14 01:18:35.221675 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 01:18:35.222345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 14 01:18:35.250671 kernel: SCSI subsystem initialized May 14 01:18:35.259661 kernel: Loading iSCSI transport class v2.0-870. May 14 01:18:35.274682 kernel: iscsi: registered transport (tcp) May 14 01:18:35.292670 kernel: iscsi: registered transport (qla4xxx) May 14 01:18:35.292708 kernel: QLogic iSCSI HBA Driver May 14 01:18:35.337682 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 14 01:18:35.339659 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 14 01:18:35.402098 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 01:18:35.402188 kernel: device-mapper: uevent: version 1.0.3 May 14 01:18:35.403873 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 14 01:18:35.447680 kernel: raid6: avx2x4 gen() 31423 MB/s May 14 01:18:35.464670 kernel: raid6: avx2x2 gen() 31911 MB/s May 14 01:18:35.481817 kernel: raid6: avx2x1 gen() 26424 MB/s May 14 01:18:35.481847 kernel: raid6: using algorithm avx2x2 gen() 31911 MB/s May 14 01:18:35.499886 kernel: raid6: .... 
xor() 20567 MB/s, rmw enabled May 14 01:18:35.499917 kernel: raid6: using avx2x2 recovery algorithm May 14 01:18:35.519700 kernel: xor: automatically using best checksumming function avx May 14 01:18:35.693694 kernel: Btrfs loaded, zoned=no, fsverity=no May 14 01:18:35.706879 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 14 01:18:35.710002 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 01:18:35.730124 systemd-udevd[406]: Using default interface naming scheme 'v255'. May 14 01:18:35.734599 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 01:18:35.740540 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 14 01:18:35.769617 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation May 14 01:18:35.817261 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 14 01:18:35.821807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 01:18:35.886715 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 01:18:35.892878 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 14 01:18:35.925568 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 14 01:18:35.928041 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 14 01:18:35.930055 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 01:18:35.931747 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 01:18:35.934772 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 14 01:18:35.960142 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 14 01:18:35.970656 kernel: scsi host0: Virtio SCSI HBA May 14 01:18:35.981386 kernel: cryptd: max_cpu_qlen set to 1000 May 14 01:18:35.986788 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 May 14 01:18:35.993670 kernel: ACPI: bus type USB registered May 14 01:18:35.997665 kernel: usbcore: registered new interface driver usbfs May 14 01:18:36.004123 kernel: usbcore: registered new interface driver hub May 14 01:18:36.006678 kernel: usbcore: registered new device driver usb May 14 01:18:36.012736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 01:18:36.013347 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:18:36.014725 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 01:18:36.015150 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 01:18:36.015243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:18:36.023866 kernel: libata version 3.00 loaded. May 14 01:18:36.020924 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:18:36.024689 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
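The raid6 lines above are the kernel benchmarking its syndrome generators and then picking the fastest one (avx2x2 at 31911 MB/s on this guest). A toy restatement of that selection over the logged figures:

```python
# Re-state the kernel's RAID6 generator pick using the throughputs logged
# above (values in MB/s, copied from the messages).
gen_results = {
    "avx2x4": 31423,
    "avx2x2": 31911,
    "avx2x1": 26424,
}

best = max(gen_results, key=gen_results.get)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
# raid6: using algorithm avx2x2 gen() 31911 MB/s
```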
May 14 01:18:36.031158 kernel: ahci 0000:00:1f.2: version 3.0 May 14 01:18:36.031302 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 14 01:18:36.033910 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 14 01:18:36.034035 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 14 01:18:36.060470 kernel: scsi host1: ahci May 14 01:18:36.060889 kernel: scsi host2: ahci May 14 01:18:36.061066 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 01:18:36.064699 kernel: AVX2 version of gcm_enc/dec engaged. May 14 01:18:36.071823 kernel: AES CTR mode by8 optimization enabled May 14 01:18:36.071873 kernel: scsi host3: ahci May 14 01:18:36.072787 kernel: scsi host4: ahci May 14 01:18:36.087679 kernel: scsi host5: ahci May 14 01:18:36.088657 kernel: scsi host6: ahci May 14 01:18:36.088785 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 May 14 01:18:36.088796 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 May 14 01:18:36.088806 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 May 14 01:18:36.088821 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 May 14 01:18:36.088831 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 May 14 01:18:36.088840 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 May 14 01:18:36.133053 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:18:36.135854 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 14 01:18:36.158825 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:18:36.406696 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 14 01:18:36.406808 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 14 01:18:36.406831 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 14 01:18:36.407695 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 14 01:18:36.410691 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 14 01:18:36.415655 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 14 01:18:36.417868 kernel: ata1.00: applying bridge limits May 14 01:18:36.420593 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 14 01:18:36.423988 kernel: ata1.00: configured for UDMA/100 May 14 01:18:36.424047 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 14 01:18:36.459888 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 14 01:18:36.460208 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 May 14 01:18:36.472815 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 14 01:18:36.479472 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller May 14 01:18:36.479775 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 May 14 01:18:36.484979 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed May 14 01:18:36.487501 kernel: hub 1-0:1.0: USB hub found May 14 01:18:36.489684 kernel: hub 1-0:1.0: 4 ports detected May 14 01:18:36.492661 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
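The six SATA ports are reported at 0xfea1a100 through 0xfea1a380 inside the AHCI ABAR at 0xfea1a000 (BAR5 of 0000:00:1f.2, logged earlier). That spacing follows the standard AHCI layout, where port register blocks start at ABAR + 0x100 and occupy 0x80 bytes each; a quick check against the logged addresses:

```python
# Verify the ahci port addresses above against the usual AHCI layout:
# port register blocks start at ABAR + 0x100, 0x80 bytes per port.
ABAR = 0xFEA1A000            # BAR5 of 0000:00:1f.2 from the PCI scan above

def port_base(abar: int, port: int) -> int:
    return abar + 0x100 + 0x80 * port

for port in range(6):        # ata1..ata6 correspond to ports 0..5
    print(f"ata{port + 1}: port {port_base(ABAR, port):#x}")
# ata1: port 0xfea1a100 ... ata6: port 0xfea1a380, matching the log
```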
May 14 01:18:36.495294 kernel: hub 2-0:1.0: USB hub found May 14 01:18:36.495516 kernel: hub 2-0:1.0: 4 ports detected May 14 01:18:36.497577 kernel: sd 0:0:0:0: Power-on or device reset occurred May 14 01:18:36.502862 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) May 14 01:18:36.503044 kernel: sd 0:0:0:0: [sda] Write Protect is off May 14 01:18:36.503178 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 14 01:18:36.503330 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 May 14 01:18:36.503480 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 14 01:18:36.503496 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA May 14 01:18:36.513837 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 01:18:36.513875 kernel: GPT:17805311 != 80003071 May 14 01:18:36.515061 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 01:18:36.521886 kernel: GPT:17805311 != 80003071 May 14 01:18:36.523101 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 01:18:36.524680 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 01:18:36.527245 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 14 01:18:36.529239 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 May 14 01:18:36.576660 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (460) May 14 01:18:36.576724 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (469) May 14 01:18:36.595475 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. May 14 01:18:36.610796 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. May 14 01:18:36.633487 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. May 14 01:18:36.634007 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. May 14 01:18:36.643694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 14 01:18:36.648599 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 14 01:18:36.666253 disk-uuid[581]: Primary Header is updated. May 14 01:18:36.666253 disk-uuid[581]: Secondary Entries is updated. May 14 01:18:36.666253 disk-uuid[581]: Secondary Header is updated. May 14 01:18:36.675773 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 01:18:36.686669 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 01:18:36.731698 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 14 01:18:36.878680 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 01:18:36.885727 kernel: usbcore: registered new interface driver usbhid May 14 01:18:36.885762 kernel: usbhid: USB HID core driver May 14 01:18:36.894156 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 May 14 01:18:36.894189 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 May 14 01:18:37.689204 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 14 01:18:37.690411 disk-uuid[582]: The operation has completed successfully. May 14 01:18:37.785158 systemd[1]: disk-uuid.service: Deactivated successfully. 
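sda is reported as 80003072 512-byte logical blocks, which the kernel prints as 41.0 GB / 38.1 GiB, and the GPT warnings show the backup header still sitting at LBA 17805311 from the original, smaller image (hence the advice to repair it with GNU Parted). The arithmetic behind those numbers, as a small sketch:

```python
# Size arithmetic behind the sd/GPT messages above.
blocks, block_size = 80_003_072, 512
size_bytes = blocks * block_size

print(f"{size_bytes / 10**9:.1f} GB")   # 41.0 GB  (decimal units, as the kernel prints)
print(f"{size_bytes / 2**30:.1f} GiB")  # 38.1 GiB (binary units)

last_lba = blocks - 1                   # 80003071, where GPT expects its backup header
stale_alt = 17_805_311                  # where the backup header actually is
print(last_lba - stale_alt)             # 62197760 sectors of growth not yet reflected in the GPT
```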
May 14 01:18:37.785289 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 14 01:18:37.831613 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 14 01:18:37.847143 sh[598]: Success May 14 01:18:37.870726 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 14 01:18:37.944239 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 14 01:18:37.949755 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 14 01:18:37.961223 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 14 01:18:37.981573 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 May 14 01:18:37.981625 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 14 01:18:37.985829 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 14 01:18:37.990163 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 14 01:18:37.993499 kernel: BTRFS info (device dm-0): using free space tree May 14 01:18:38.007701 kernel: BTRFS info (device dm-0): enabling ssd optimizations May 14 01:18:38.011121 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 14 01:18:38.012878 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 14 01:18:38.015125 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 14 01:18:38.019774 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 14 01:18:38.068081 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 01:18:38.068166 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 01:18:38.068189 kernel: BTRFS info (device sda6): using free space tree May 14 01:18:38.073949 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 01:18:38.074000 kernel: BTRFS info (device sda6): auto enabling async discard May 14 01:18:38.079668 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 01:18:38.081396 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 14 01:18:38.084793 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 14 01:18:38.120992 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 01:18:38.124505 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 01:18:38.170494 systemd-networkd[776]: lo: Link UP May 14 01:18:38.170503 systemd-networkd[776]: lo: Gained carrier May 14 01:18:38.173783 systemd-networkd[776]: Enumeration completed May 14 01:18:38.173940 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 01:18:38.174568 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 01:18:38.174571 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 01:18:38.175808 systemd[1]: Reached target network.target - Network. May 14 01:18:38.179711 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 14 01:18:38.179717 systemd-networkd[776]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 01:18:38.181610 systemd-networkd[776]: eth0: Link UP May 14 01:18:38.181613 systemd-networkd[776]: eth0: Gained carrier May 14 01:18:38.181620 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 01:18:38.186883 systemd-networkd[776]: eth1: Link UP May 14 01:18:38.186888 systemd-networkd[776]: eth1: Gained carrier May 14 01:18:38.186896 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 01:18:38.190084 ignition[731]: Ignition 2.20.0 May 14 01:18:38.190100 ignition[731]: Stage: fetch-offline May 14 01:18:38.190140 ignition[731]: no configs at "/usr/lib/ignition/base.d" May 14 01:18:38.191583 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 14 01:18:38.190151 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 01:18:38.193733 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 14 01:18:38.190271 ignition[731]: parsed url from cmdline: "" May 14 01:18:38.190275 ignition[731]: no config URL provided May 14 01:18:38.190282 ignition[731]: reading system config file "/usr/lib/ignition/user.ign" May 14 01:18:38.190291 ignition[731]: no config at "/usr/lib/ignition/user.ign" May 14 01:18:38.190297 ignition[731]: failed to fetch config: resource requires networking May 14 01:18:38.190529 ignition[731]: Ignition finished successfully May 14 01:18:38.210659 ignition[785]: Ignition 2.20.0 May 14 01:18:38.210668 ignition[785]: Stage: fetch May 14 01:18:38.210824 ignition[785]: no configs at "/usr/lib/ignition/base.d" May 14 01:18:38.210831 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 01:18:38.210907 ignition[785]: parsed url from cmdline: "" May 14 01:18:38.210910 ignition[785]: no config URL provided May 14 01:18:38.210915 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" May 14 01:18:38.210920 ignition[785]: no config at "/usr/lib/ignition/user.ign" May 14 01:18:38.210940 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 May 14 01:18:38.211058 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable May 14 01:18:38.230692 systemd-networkd[776]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 01:18:38.248667 systemd-networkd[776]: eth0: DHCPv4 address 37.27.220.42/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 14 01:18:38.411771 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 May 14 01:18:38.418315 ignition[785]: GET result: OK May 14 01:18:38.418415 ignition[785]: parsing config with SHA512: f23aa51bbf104d42ee9d9c1aba276ce696044ff805afc1a40fd675cec59b1d7d055e129738b1928d3cf2cbfb3244024d2676c4441a8a6894e507d06f47f6c8ac May 14 01:18:38.426288 unknown[785]: fetched base config from "system" May 14 01:18:38.427334 unknown[785]: fetched base config from "system" May 14 01:18:38.428013 ignition[785]: fetch: fetch complete May 14 01:18:38.427345 unknown[785]: fetched user config from "hetzner" May 14 01:18:38.428022 ignition[785]: fetch: fetch passed May 14 01:18:38.428086 ignition[785]: Ignition finished successfully May 14 01:18:38.431504 systemd[1]: Finished 
ignition-fetch.service - Ignition (fetch). May 14 01:18:38.434932 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 14 01:18:38.473946 ignition[793]: Ignition 2.20.0 May 14 01:18:38.473964 ignition[793]: Stage: kargs May 14 01:18:38.474250 ignition[793]: no configs at "/usr/lib/ignition/base.d" May 14 01:18:38.474266 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 01:18:38.477728 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 14 01:18:38.475959 ignition[793]: kargs: kargs passed May 14 01:18:38.476033 ignition[793]: Ignition finished successfully May 14 01:18:38.483814 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 14 01:18:38.514571 ignition[800]: Ignition 2.20.0 May 14 01:18:38.514592 ignition[800]: Stage: disks May 14 01:18:38.514910 ignition[800]: no configs at "/usr/lib/ignition/base.d" May 14 01:18:38.514928 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 01:18:38.517963 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 14 01:18:38.516530 ignition[800]: disks: disks passed May 14 01:18:38.519818 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 14 01:18:38.516594 ignition[800]: Ignition finished successfully May 14 01:18:38.521564 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 14 01:18:38.523671 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 01:18:38.525457 systemd[1]: Reached target sysinit.target - System Initialization. May 14 01:18:38.527698 systemd[1]: Reached target basic.target - Basic System. May 14 01:18:38.532811 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 14 01:18:38.563297 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 14 01:18:38.567792 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 14 01:18:38.570623 systemd[1]: Mounting sysroot.mount - /sysroot... May 14 01:18:38.689688 kernel: EXT4-fs (sda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none. May 14 01:18:38.690148 systemd[1]: Mounted sysroot.mount - /sysroot. May 14 01:18:38.691021 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 14 01:18:38.694954 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 01:18:38.708695 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 14 01:18:38.710741 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 14 01:18:38.712116 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 01:18:38.712862 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 14 01:18:38.717004 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 14 01:18:38.721319 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
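Editor's note: the fetch stage above shows the pattern worth noting here: the first GET to the Hetzner userdata endpoint fails with "network is unreachable", DHCP then configures eth0/eth1, and the retry succeeds and the config is checksummed. A rough stand-alone sketch of that retry-until-the-network-is-up loop (Ignition itself is written in Go; this illustration only reuses the URL from the log, and the retry delay and attempt cap are assumptions):

```python
# Illustration of the retry loop visible in the ignition "fetch" stage above:
# keep retrying the metadata endpoint until the network is up, then record a
# SHA512 of the retrieved userdata, as the log does. The 2-second delay and
# 30-attempt cap are assumptions, not Ignition's actual values.
import hashlib, time, urllib.error, urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"   # endpoint from the log

def fetch_userdata(max_attempts: int = 30, delay: float = 2.0) -> bytes:
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            print(f"GET attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("metadata service unreachable, giving up")

if __name__ == "__main__":
    data = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
```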
May 14 01:18:38.729756 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (817) May 14 01:18:38.739982 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 01:18:38.740023 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 01:18:38.744466 kernel: BTRFS info (device sda6): using free space tree May 14 01:18:38.753853 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 01:18:38.753887 kernel: BTRFS info (device sda6): auto enabling async discard May 14 01:18:38.760316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 14 01:18:38.783147 coreos-metadata[819]: May 14 01:18:38.783 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 14 01:18:38.785918 coreos-metadata[819]: May 14 01:18:38.784 INFO Fetch successful May 14 01:18:38.785918 coreos-metadata[819]: May 14 01:18:38.784 INFO wrote hostname ci-4284-0-0-n-c0828c9b46 to /sysroot/etc/hostname May 14 01:18:38.786620 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 01:18:38.791729 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory May 14 01:18:38.796221 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory May 14 01:18:38.800192 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory May 14 01:18:38.805083 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory May 14 01:18:38.900166 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 14 01:18:38.903539 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 14 01:18:38.906827 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 14 01:18:38.925693 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 01:18:38.945278 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 14 01:18:38.957866 ignition[935]: INFO : Ignition 2.20.0 May 14 01:18:38.957866 ignition[935]: INFO : Stage: mount May 14 01:18:38.960035 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 01:18:38.960035 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 01:18:38.960035 ignition[935]: INFO : mount: mount passed May 14 01:18:38.960035 ignition[935]: INFO : Ignition finished successfully May 14 01:18:38.961125 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 14 01:18:38.965788 systemd[1]: Starting ignition-files.service - Ignition (files)... May 14 01:18:38.975186 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 14 01:18:38.982318 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 14 01:18:39.006697 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (947) May 14 01:18:39.012114 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 14 01:18:39.012180 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 14 01:18:39.015162 kernel: BTRFS info (device sda6): using free space tree May 14 01:18:39.028691 kernel: BTRFS info (device sda6): enabling ssd optimizations May 14 01:18:39.028743 kernel: BTRFS info (device sda6): auto enabling async discard May 14 01:18:39.035825 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 14 01:18:39.067736 ignition[963]: INFO : Ignition 2.20.0 May 14 01:18:39.067736 ignition[963]: INFO : Stage: files May 14 01:18:39.070054 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 01:18:39.070054 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 01:18:39.070054 ignition[963]: DEBUG : files: compiled without relabeling support, skipping May 14 01:18:39.074574 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 01:18:39.074574 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 01:18:39.077974 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 01:18:39.077974 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 01:18:39.077974 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 01:18:39.076394 unknown[963]: wrote ssh authorized keys file for user: core May 14 01:18:39.083754 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 14 01:18:39.083754 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 14 01:18:39.319872 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 14 01:18:39.663945 systemd-networkd[776]: eth0: Gained IPv6LL May 14 01:18:39.677596 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 01:18:39.680535 ignition[963]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 01:18:39.680535 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 14 01:18:40.239903 systemd-networkd[776]: eth1: Gained IPv6LL May 14 01:18:40.429895 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 14 01:18:41.624850 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 14 01:18:41.624850 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 14 01:18:41.629127 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 01:18:41.629127 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 14 01:18:41.629127 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 14 01:18:41.629127 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 14 01:18:41.629127 ignition[963]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 14 01:18:41.629127 ignition[963]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 14 01:18:41.629127 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 14 01:18:41.629127 ignition[963]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" May 14 01:18:41.629127 ignition[963]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" May 14 01:18:41.629127 ignition[963]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 01:18:41.629127 ignition[963]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 01:18:41.629127 ignition[963]: INFO : files: files passed May 14 01:18:41.629127 ignition[963]: INFO : Ignition finished successfully May 14 01:18:41.629286 systemd[1]: Finished ignition-files.service - Ignition (files). May 14 01:18:41.638842 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 14 01:18:41.651121 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 14 01:18:41.656952 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 01:18:41.657090 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
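Editor's note: everything in the files stage above reduces to a few primitive operations against the still-mounted target root: fetch a payload, write it to its destination, create links, and drop systemd units and presets. A minimal sketch of two of those primitives, using paths taken from the log (the /sysroot prefix and the helper names are illustrative assumptions; Ignition's real implementation is Go and also handles verification, ownership, and relabeling):

```python
# Minimal sketch of two "files" stage primitives seen above: write a file
# fetched over HTTPS into the target root, and create a symlink such as
# /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/...
import os, shutil, urllib.request

SYSROOT = "/sysroot"   # target root during the initrd phase (assumption in this sketch)

def write_remote_file(url: str, dest: str, mode: int = 0o644) -> None:
    path = SYSROOT + dest
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
        shutil.copyfileobj(resp, out)   # stream the download to disk
    os.chmod(path, mode)

def write_link(target: str, linkpath: str) -> None:
    path = SYSROOT + linkpath
    os.makedirs(os.path.dirname(path), exist_ok=True)
    os.symlink(target, path)

# Mirrors two of the entries logged above:
# write_remote_file("https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz",
#                   "/opt/helm-v3.17.0-linux-amd64.tar.gz")
# write_link("/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
#            "/etc/extensions/kubernetes.raw")
```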
May 14 01:18:41.679125 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 01:18:41.679125 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 14 01:18:41.682913 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 01:18:41.684311 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 01:18:41.686378 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 14 01:18:41.690832 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 14 01:18:41.748379 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 01:18:41.748569 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 14 01:18:41.751260 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 14 01:18:41.754378 systemd[1]: Reached target initrd.target - Initrd Default Target. May 14 01:18:41.755624 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 14 01:18:41.758219 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 14 01:18:41.787413 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 01:18:41.791901 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 14 01:18:41.816373 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 14 01:18:41.818978 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 01:18:41.820372 systemd[1]: Stopped target timers.target - Timer Units. May 14 01:18:41.822548 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 01:18:41.822781 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 14 01:18:41.825272 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 14 01:18:41.826795 systemd[1]: Stopped target basic.target - Basic System. May 14 01:18:41.829102 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 14 01:18:41.831382 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 14 01:18:41.833824 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 14 01:18:41.836072 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 14 01:18:41.838385 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 14 01:18:41.840970 systemd[1]: Stopped target sysinit.target - System Initialization. May 14 01:18:41.843307 systemd[1]: Stopped target local-fs.target - Local File Systems. May 14 01:18:41.845861 systemd[1]: Stopped target swap.target - Swaps. May 14 01:18:41.847956 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 01:18:41.848202 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 14 01:18:41.850697 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 14 01:18:41.852173 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 01:18:41.854273 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 14 01:18:41.854494 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
May 14 01:18:41.857041 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 01:18:41.857280 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 14 01:18:41.860372 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 01:18:41.860635 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 14 01:18:41.863133 systemd[1]: ignition-files.service: Deactivated successfully. May 14 01:18:41.863400 systemd[1]: Stopped ignition-files.service - Ignition (files). May 14 01:18:41.865677 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 14 01:18:41.865939 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 14 01:18:41.870994 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 14 01:18:41.872668 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 01:18:41.872968 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 14 01:18:41.879074 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 14 01:18:41.883086 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 14 01:18:41.883323 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 14 01:18:41.886799 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 01:18:41.886932 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 14 01:18:41.893959 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 01:18:41.894057 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 14 01:18:41.909401 ignition[1018]: INFO : Ignition 2.20.0 May 14 01:18:41.909401 ignition[1018]: INFO : Stage: umount May 14 01:18:41.913747 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 01:18:41.913747 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 14 01:18:41.913747 ignition[1018]: INFO : umount: umount passed May 14 01:18:41.913747 ignition[1018]: INFO : Ignition finished successfully May 14 01:18:41.912970 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 01:18:41.913091 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 14 01:18:41.924373 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 01:18:41.924869 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 01:18:41.924946 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 14 01:18:41.925820 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 01:18:41.925874 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 14 01:18:41.926800 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 01:18:41.926833 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 14 01:18:41.928124 systemd[1]: ignition-fetch.service: Deactivated successfully. May 14 01:18:41.928157 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 14 01:18:41.929297 systemd[1]: Stopped target network.target - Network. May 14 01:18:41.930431 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 01:18:41.930534 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 14 01:18:41.931681 systemd[1]: Stopped target paths.target - Path Units. May 14 01:18:41.932844 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
May 14 01:18:41.937664 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 01:18:41.939071 systemd[1]: Stopped target slices.target - Slice Units. May 14 01:18:41.940480 systemd[1]: Stopped target sockets.target - Socket Units. May 14 01:18:41.941644 systemd[1]: iscsid.socket: Deactivated successfully. May 14 01:18:41.941672 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 14 01:18:41.943084 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 01:18:41.943111 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 14 01:18:41.944629 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 01:18:41.944675 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 14 01:18:41.945797 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 14 01:18:41.945828 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 14 01:18:41.946971 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 01:18:41.947001 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 14 01:18:41.948210 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 14 01:18:41.949395 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 14 01:18:41.953488 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 01:18:41.953569 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 14 01:18:41.956889 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 14 01:18:41.957066 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 01:18:41.957139 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 14 01:18:41.959365 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 14 01:18:41.959946 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 01:18:41.959974 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 14 01:18:41.961751 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 14 01:18:41.963062 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 01:18:41.963128 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 14 01:18:41.963973 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 01:18:41.964022 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 14 01:18:41.967304 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 01:18:41.967340 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 14 01:18:41.969237 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 14 01:18:41.969315 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 01:18:41.970397 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 01:18:41.975471 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 01:18:41.975550 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 14 01:18:41.986688 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 01:18:41.986879 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
May 14 01:18:41.988331 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 01:18:41.988425 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 14 01:18:41.990283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 01:18:41.990337 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 14 01:18:41.991381 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 14 01:18:41.991418 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 14 01:18:41.992992 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 01:18:41.993042 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 14 01:18:41.995099 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 01:18:41.995152 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 14 01:18:41.996499 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 01:18:41.996553 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 14 01:18:42.000763 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 14 01:18:42.001875 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 14 01:18:42.001937 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 01:18:42.004739 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 01:18:42.004790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:18:42.007506 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 14 01:18:42.007574 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 01:18:42.018714 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 01:18:42.018852 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 14 01:18:42.020516 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 14 01:18:42.022674 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 14 01:18:42.042599 systemd[1]: Switching root. May 14 01:18:42.118391 systemd-journald[187]: Journal stopped May 14 01:18:43.179427 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). May 14 01:18:43.179491 kernel: SELinux: policy capability network_peer_controls=1 May 14 01:18:43.179505 kernel: SELinux: policy capability open_perms=1 May 14 01:18:43.179519 kernel: SELinux: policy capability extended_socket_class=1 May 14 01:18:43.179528 kernel: SELinux: policy capability always_check_network=0 May 14 01:18:43.179538 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 01:18:43.179547 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 01:18:43.179556 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 01:18:43.179569 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 01:18:43.179582 kernel: audit: type=1403 audit(1747185522.285:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 01:18:43.179595 systemd[1]: Successfully loaded SELinux policy in 54.917ms. May 14 01:18:43.179618 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.127ms. 
May 14 01:18:43.179632 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 14 01:18:43.179665 systemd[1]: Detected virtualization kvm. May 14 01:18:43.179675 systemd[1]: Detected architecture x86-64. May 14 01:18:43.179685 systemd[1]: Detected first boot. May 14 01:18:43.179696 systemd[1]: Hostname set to <ci-4284-0-0-n-c0828c9b46>. May 14 01:18:43.179706 systemd[1]: Initializing machine ID from VM UUID. May 14 01:18:43.179716 zram_generator::config[1063]: No configuration found. May 14 01:18:43.179729 kernel: Guest personality initialized and is inactive May 14 01:18:43.179738 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 14 01:18:43.179748 kernel: Initialized host personality May 14 01:18:43.179756 kernel: NET: Registered PF_VSOCK protocol family May 14 01:18:43.179769 systemd[1]: Populated /etc with preset unit settings. May 14 01:18:43.179779 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 14 01:18:43.179790 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 01:18:43.179799 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 14 01:18:43.179809 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 01:18:43.179820 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 14 01:18:43.179830 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 14 01:18:43.179840 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 14 01:18:43.179850 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 14 01:18:43.179860 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 14 01:18:43.179870 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 14 01:18:43.179879 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 14 01:18:43.179890 systemd[1]: Created slice user.slice - User and Session Slice. May 14 01:18:43.179899 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 14 01:18:43.179911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 14 01:18:43.179921 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 14 01:18:43.179931 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 14 01:18:43.179941 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 14 01:18:43.179951 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 14 01:18:43.179962 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 14 01:18:43.179973 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 14 01:18:43.179983 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 14 01:18:43.179993 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
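Editor's note: "Initializing machine ID from VM UUID" above refers to systemd deriving /etc/machine-id on first boot from the hypervisor-provided DMI product UUID on this KVM guest. A rough sketch of that derivation (reading /sys/class/dmi/id/product_uuid and compacting it to 32 lowercase hex digits; treat this as an approximation, not a byte-exact reimplementation of systemd's logic):

```python
# Rough illustration of "Initializing machine ID from VM UUID": on a KVM guest
# the DMI product UUID is exposed via sysfs (root-readable), and a machine ID
# is a 32-character lowercase hex string, so the derivation is essentially
# strip-the-dashes-and-lowercase. systemd's real code handles more cases.
def machine_id_from_dmi(path: str = "/sys/class/dmi/id/product_uuid") -> str:
    with open(path) as f:
        uuid = f.read().strip()
    machine_id = uuid.replace("-", "").lower()
    if len(machine_id) != 32 or any(c not in "0123456789abcdef" for c in machine_id):
        raise ValueError(f"unexpected product_uuid: {uuid!r}")
    return machine_id

# print(machine_id_from_dmi())  # typically requires root
```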
May 14 01:18:43.180003 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 14 01:18:43.180013 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 14 01:18:43.180023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 14 01:18:43.180033 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 14 01:18:43.180043 systemd[1]: Reached target slices.target - Slice Units. May 14 01:18:43.180053 systemd[1]: Reached target swap.target - Swaps. May 14 01:18:43.180065 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 14 01:18:43.180075 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 14 01:18:43.180085 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 14 01:18:43.180098 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 14 01:18:43.180110 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 14 01:18:43.180120 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 14 01:18:43.180131 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 14 01:18:43.180141 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 14 01:18:43.180151 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 14 01:18:43.180161 systemd[1]: Mounting media.mount - External Media Directory... May 14 01:18:43.180170 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 01:18:43.180180 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 14 01:18:43.180191 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 14 01:18:43.180204 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 14 01:18:43.180214 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 01:18:43.180226 systemd[1]: Reached target machines.target - Containers. May 14 01:18:43.180236 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 14 01:18:43.180246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 01:18:43.180256 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 14 01:18:43.180266 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 14 01:18:43.180276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 01:18:43.180286 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 01:18:43.180296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 01:18:43.180307 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 14 01:18:43.180316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 01:18:43.180327 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 01:18:43.180337 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
May 14 01:18:43.180347 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 14 01:18:43.180357 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 01:18:43.180367 systemd[1]: Stopped systemd-fsck-usr.service. May 14 01:18:43.180378 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 01:18:43.180389 systemd[1]: Starting systemd-journald.service - Journal Service... May 14 01:18:43.180399 kernel: fuse: init (API version 7.39) May 14 01:18:43.180408 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 14 01:18:43.180418 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 14 01:18:43.180428 kernel: ACPI: bus type drm_connector registered May 14 01:18:43.180449 kernel: loop: module loaded May 14 01:18:43.180459 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 14 01:18:43.180470 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 14 01:18:43.180479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 14 01:18:43.180491 systemd[1]: verity-setup.service: Deactivated successfully. May 14 01:18:43.180502 systemd[1]: Stopped verity-setup.service. May 14 01:18:43.180513 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 01:18:43.180523 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 14 01:18:43.180533 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 14 01:18:43.180544 systemd[1]: Mounted media.mount - External Media Directory. May 14 01:18:43.180554 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 14 01:18:43.180564 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 14 01:18:43.180574 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 14 01:18:43.180584 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 14 01:18:43.180596 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 14 01:18:43.180606 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 01:18:43.180616 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 14 01:18:43.180626 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 01:18:43.180662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 01:18:43.180673 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 01:18:43.180683 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 01:18:43.180695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 01:18:43.180705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 01:18:43.180731 systemd-journald[1147]: Collecting audit messages is disabled. May 14 01:18:43.180753 systemd-journald[1147]: Journal started May 14 01:18:43.180781 systemd-journald[1147]: Runtime Journal (/run/log/journal/9966c5e1062b4f28a225eaea89f367b6) is 4.7M, max 38.3M, 33.5M free. 
May 14 01:18:42.856413 systemd[1]: Queued start job for default target multi-user.target. May 14 01:18:42.864524 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 14 01:18:42.865154 systemd[1]: systemd-journald.service: Deactivated successfully. May 14 01:18:43.182671 systemd[1]: Started systemd-journald.service - Journal Service. May 14 01:18:43.182928 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 01:18:43.183054 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 14 01:18:43.183711 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 01:18:43.183829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 01:18:43.184478 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 14 01:18:43.185125 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 14 01:18:43.185926 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 14 01:18:43.186591 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 14 01:18:43.193534 systemd[1]: Reached target network-pre.target - Preparation for Network. May 14 01:18:43.195738 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 14 01:18:43.200018 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 14 01:18:43.200897 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 01:18:43.200972 systemd[1]: Reached target local-fs.target - Local File Systems. May 14 01:18:43.202299 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 14 01:18:43.206922 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 01:18:43.209761 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 14 01:18:43.210289 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 01:18:43.213954 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 14 01:18:43.217823 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 14 01:18:43.219731 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 01:18:43.222509 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 14 01:18:43.224912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 01:18:43.239734 systemd-journald[1147]: Time spent on flushing to /var/log/journal/9966c5e1062b4f28a225eaea89f367b6 is 53.205ms for 1139 entries. May 14 01:18:43.239734 systemd-journald[1147]: System Journal (/var/log/journal/9966c5e1062b4f28a225eaea89f367b6) is 8M, max 584.8M, 576.8M free. May 14 01:18:43.310779 systemd-journald[1147]: Received client request to flush runtime journal. May 14 01:18:43.310818 kernel: loop0: detected capacity change from 0 to 8 May 14 01:18:43.310831 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 01:18:43.228839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
May 14 01:18:43.235842 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 14 01:18:43.241543 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 14 01:18:43.243475 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 14 01:18:43.244033 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 14 01:18:43.247716 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 01:18:43.254865 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 14 01:18:43.255712 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 14 01:18:43.258203 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 14 01:18:43.265592 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 14 01:18:43.269259 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 14 01:18:43.272860 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 14 01:18:43.314380 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 14 01:18:43.316707 kernel: loop1: detected capacity change from 0 to 218376 May 14 01:18:43.319549 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 14 01:18:43.333510 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 14 01:18:43.338150 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 14 01:18:43.339805 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 14 01:18:43.368021 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. May 14 01:18:43.368283 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. May 14 01:18:43.373666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 14 01:18:43.374674 kernel: loop2: detected capacity change from 0 to 151640 May 14 01:18:43.427661 kernel: loop3: detected capacity change from 0 to 109808 May 14 01:18:43.467660 kernel: loop4: detected capacity change from 0 to 8 May 14 01:18:43.470654 kernel: loop5: detected capacity change from 0 to 218376 May 14 01:18:43.501835 kernel: loop6: detected capacity change from 0 to 151640 May 14 01:18:43.528063 kernel: loop7: detected capacity change from 0 to 109808 May 14 01:18:43.550764 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 14 01:18:43.551159 (sd-merge)[1214]: Merged extensions into '/usr'. May 14 01:18:43.557518 systemd[1]: Reload requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)... May 14 01:18:43.557698 systemd[1]: Reloading... May 14 01:18:43.625662 zram_generator::config[1238]: No configuration found. May 14 01:18:43.752774 ldconfig[1184]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 01:18:43.771589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 01:18:43.838554 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
May 14 01:18:43.838927 systemd[1]: Reloading finished in 280 ms. May 14 01:18:43.851332 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 14 01:18:43.853538 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 14 01:18:43.862727 systemd[1]: Starting ensure-sysext.service... May 14 01:18:43.864605 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 14 01:18:43.881456 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... May 14 01:18:43.881571 systemd[1]: Reloading... May 14 01:18:43.889300 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 01:18:43.889530 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 14 01:18:43.890179 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 01:18:43.890404 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. May 14 01:18:43.890468 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. May 14 01:18:43.893987 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. May 14 01:18:43.893995 systemd-tmpfiles[1286]: Skipping /boot May 14 01:18:43.909584 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. May 14 01:18:43.909879 systemd-tmpfiles[1286]: Skipping /boot May 14 01:18:43.963665 zram_generator::config[1321]: No configuration found. May 14 01:18:44.050328 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 01:18:44.117487 systemd[1]: Reloading finished in 235 ms. May 14 01:18:44.129239 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 14 01:18:44.130100 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 14 01:18:44.147736 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 01:18:44.151937 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 14 01:18:44.154520 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 14 01:18:44.159981 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 14 01:18:44.163602 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 14 01:18:44.165932 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 14 01:18:44.174418 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 01:18:44.174576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 01:18:44.181080 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 01:18:44.186942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 01:18:44.190900 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 01:18:44.191617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 14 01:18:44.191736 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 01:18:44.191831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 01:18:44.197873 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 14 01:18:44.202518 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 01:18:44.203785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 01:18:44.204267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 01:18:44.204354 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 01:18:44.204453 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 01:18:44.206908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 01:18:44.207076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 01:18:44.227489 systemd-udevd[1364]: Using default interface naming scheme 'v255'. May 14 01:18:44.228478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 01:18:44.228654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 01:18:44.230266 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 01:18:44.230400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 01:18:44.236902 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 14 01:18:44.240877 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 14 01:18:44.243475 systemd[1]: Finished ensure-sysext.service. May 14 01:18:44.247688 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 01:18:44.247842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 01:18:44.249860 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 01:18:44.251594 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 14 01:18:44.252777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 01:18:44.252811 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 01:18:44.252855 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 01:18:44.255757 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
May 14 01:18:44.258051 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 14 01:18:44.259693 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 01:18:44.278391 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 14 01:18:44.282528 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 14 01:18:44.283581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 01:18:44.284142 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 01:18:44.284816 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 01:18:44.285298 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 14 01:18:44.288621 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 01:18:44.297690 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 14 01:18:44.309191 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 14 01:18:44.314175 augenrules[1422]: No rules May 14 01:18:44.316351 systemd[1]: audit-rules.service: Deactivated successfully. May 14 01:18:44.316560 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 01:18:44.317708 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 14 01:18:44.319148 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 01:18:44.400382 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 14 01:18:44.445031 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 14 01:18:44.445236 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 14 01:18:44.445326 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 14 01:18:44.448020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 14 01:18:44.449376 kernel: mousedev: PS/2 mouse device common for all mice May 14 01:18:44.453804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 14 01:18:44.458791 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 14 01:18:44.459784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 14 01:18:44.459812 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 14 01:18:44.459836 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 01:18:44.459847 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 14 01:18:44.472168 systemd-networkd[1401]: lo: Link UP May 14 01:18:44.472181 systemd-networkd[1401]: lo: Gained carrier May 14 01:18:44.476294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 01:18:44.477848 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 May 14 01:18:44.476451 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 14 01:18:44.477452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 01:18:44.478034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 14 01:18:44.478623 systemd-networkd[1401]: Enumeration completed May 14 01:18:44.478934 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 01:18:44.478938 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 01:18:44.478959 systemd[1]: Started systemd-networkd.service - Network Configuration. May 14 01:18:44.479787 systemd-networkd[1401]: eth0: Link UP May 14 01:18:44.479795 systemd-networkd[1401]: eth0: Gained carrier May 14 01:18:44.479807 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 01:18:44.480607 systemd-resolved[1363]: Positive Trust Anchors: May 14 01:18:44.480615 systemd-resolved[1363]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 01:18:44.480659 systemd-resolved[1363]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 14 01:18:44.485795 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 14 01:18:44.489949 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 14 01:18:44.490694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 01:18:44.491001 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 01:18:44.491953 kernel: ACPI: button: Power Button [PWRF] May 14 01:18:44.491504 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 14 01:18:44.493384 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 14 01:18:44.494405 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 14 01:18:44.496796 systemd[1]: Reached target time-set.target - System Time Set. May 14 01:18:44.510327 systemd-resolved[1363]: Using system hostname 'ci-4284-0-0-n-c0828c9b46'. May 14 01:18:44.515570 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 14 01:18:44.516135 systemd[1]: Reached target network.target - Network. May 14 01:18:44.518694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
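The systemd-resolved lines above list one positive trust anchor (the root-zone DS record used for DNSSEC validation) and the built-in negative trust anchors: names such as home.arpa, the RFC 1918 reverse zones, local and test, which resolved treats as deliberately unsigned. A rough sketch of the label-wise suffix match this implies, using a subset of the names from the log; this is an illustration, not resolved's actual code path.

    # Sketch: does a name fall under one of the negative trust anchors above?
    # Matching is per DNS label (suffix of labels), not a plain string endswith.
    NEGATIVE_ANCHORS = [
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
        "d.f.ip6.arpa", "ipv4only.arpa", "resolver.arpa",
        "corp", "home", "internal", "intranet", "lan", "local", "private", "test",
    ]

    def under_negative_anchor(name: str) -> bool:
        labels = name.rstrip(".").lower().split(".")
        for anchor in NEGATIVE_ANCHORS:
            alabels = anchor.split(".")
            if len(labels) >= len(alabels) and labels[-len(alabels):] == alabels:
                return True
        return False

    print(under_negative_anchor("printer.lan"))   # True  -> skipped by DNSSEC validation
    print(under_negative_anchor("example.org"))   # False -> validated against the root anchor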
May 14 01:18:44.523403 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 14 01:18:44.531706 systemd-networkd[1401]: eth0: DHCPv4 address 37.27.220.42/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 14 01:18:44.533241 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. May 14 01:18:44.541019 systemd-networkd[1401]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 01:18:44.541036 systemd-networkd[1401]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 01:18:44.541832 systemd-networkd[1401]: eth1: Link UP May 14 01:18:44.541838 systemd-networkd[1401]: eth1: Gained carrier May 14 01:18:44.541849 systemd-networkd[1401]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 14 01:18:44.556713 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1415) May 14 01:18:44.567684 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 May 14 01:18:44.571711 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console May 14 01:18:44.586807 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 14 01:18:44.587686 kernel: Console: switching to colour dummy device 80x25 May 14 01:18:44.587700 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 14 01:18:44.587713 kernel: [drm] features: -context_init May 14 01:18:44.587724 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 14 01:18:44.587851 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 14 01:18:44.587935 kernel: [drm] number of scanouts: 1 May 14 01:18:44.587953 kernel: [drm] number of cap sets: 0 May 14 01:18:44.577103 systemd-networkd[1401]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 01:18:44.600684 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 14 01:18:44.603663 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 May 14 01:18:44.625210 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device May 14 01:18:44.625288 kernel: Console: switching to colour frame buffer device 160x50 May 14 01:18:44.636758 kernel: EDAC MC: Ver: 3.0.0 May 14 01:18:44.645810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 14 01:18:44.654094 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 14 01:18:44.661001 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 14 01:18:44.672724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:18:44.684606 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 01:18:44.685283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 14 01:18:44.687942 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 14 01:18:44.688365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 14 01:18:44.693685 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 01:18:44.757481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
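Note the shape of the DHCPv4 lease on eth0: a /32 host address (37.27.220.42/32) with a gateway (172.31.1.1) that lies outside that prefix, which is the usual Hetzner cloud layout and means the gateway has to be reached via an on-link host route rather than a connected subnet. A quick check of that relationship with the values from the lease, as a sketch:

    # Sketch: the DHCPv4 lease from the log - a single-host prefix whose gateway
    # is not inside it, hence the need for an on-link route to 172.31.1.1.
    import ipaddress

    iface = ipaddress.ip_interface("37.27.220.42/32")   # address acquired on eth0
    gateway = ipaddress.ip_address("172.31.1.1")        # gateway from the same lease

    print(iface.network)              # 37.27.220.42/32 - covers exactly one host
    print(gateway in iface.network)   # False: the gateway is off-prefix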
May 14 01:18:44.793380 systemd-timesyncd[1391]: Contacted time server 217.197.91.176:123 (0.flatcar.pool.ntp.org). May 14 01:18:44.793461 systemd-timesyncd[1391]: Initial clock synchronization to Wed 2025-05-14 01:18:45.013361 UTC. May 14 01:18:44.820167 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 14 01:18:44.822771 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 14 01:18:44.851350 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 01:18:44.892103 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 14 01:18:44.893307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 14 01:18:44.894922 systemd[1]: Reached target sysinit.target - System Initialization. May 14 01:18:44.895235 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 14 01:18:44.895421 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 14 01:18:44.895975 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 01:18:44.896261 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 01:18:44.896371 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 01:18:44.896510 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 01:18:44.896551 systemd[1]: Reached target paths.target - Path Units. May 14 01:18:44.897208 systemd[1]: Reached target timers.target - Timer Units. May 14 01:18:44.899452 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 01:18:44.902528 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 01:18:44.908207 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 01:18:44.910946 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 01:18:44.913498 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 01:18:44.928911 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 01:18:44.931777 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 01:18:44.936028 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 14 01:18:44.943292 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 01:18:44.944880 systemd[1]: Reached target sockets.target - Socket Units. May 14 01:18:44.946056 systemd[1]: Reached target basic.target - Basic System. May 14 01:18:44.950331 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 01:18:44.950390 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 01:18:44.952401 systemd[1]: Starting containerd.service - containerd container runtime... May 14 01:18:44.962270 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 01:18:44.963022 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 14 01:18:44.971850 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
May 14 01:18:44.978838 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 01:18:44.993929 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 01:18:44.997761 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 01:18:45.001919 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 01:18:45.011592 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 01:18:45.019230 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 14 01:18:45.025855 coreos-metadata[1484]: May 14 01:18:45.024 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 14 01:18:45.026343 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 01:18:45.031822 coreos-metadata[1484]: May 14 01:18:45.026 INFO Fetch successful May 14 01:18:45.031822 coreos-metadata[1484]: May 14 01:18:45.026 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 May 14 01:18:45.035736 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 01:18:45.045860 extend-filesystems[1489]: Found loop4 May 14 01:18:45.045860 extend-filesystems[1489]: Found loop5 May 14 01:18:45.045860 extend-filesystems[1489]: Found loop6 May 14 01:18:45.045860 extend-filesystems[1489]: Found loop7 May 14 01:18:45.045860 extend-filesystems[1489]: Found sda May 14 01:18:45.045860 extend-filesystems[1489]: Found sda1 May 14 01:18:45.045860 extend-filesystems[1489]: Found sda2 May 14 01:18:45.045860 extend-filesystems[1489]: Found sda3 May 14 01:18:45.045860 extend-filesystems[1489]: Found usr May 14 01:18:45.045860 extend-filesystems[1489]: Found sda4 May 14 01:18:45.045860 extend-filesystems[1489]: Found sda6 May 14 01:18:45.045860 extend-filesystems[1489]: Found sda7 May 14 01:18:45.045860 extend-filesystems[1489]: Found sda9 May 14 01:18:45.045860 extend-filesystems[1489]: Checking size of /dev/sda9 May 14 01:18:45.144429 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks May 14 01:18:45.144455 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1399) May 14 01:18:45.144517 jq[1486]: false May 14 01:18:45.163958 extend-filesystems[1489]: Resized partition /dev/sda9 May 14 01:18:45.165250 coreos-metadata[1484]: May 14 01:18:45.046 INFO Fetch successful May 14 01:18:45.049096 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 01:18:45.105949 dbus-daemon[1485]: [system] SELinux support is enabled May 14 01:18:45.180689 extend-filesystems[1507]: resize2fs 1.47.2 (1-Jan-2025) May 14 01:18:45.055202 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 01:18:45.063079 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 01:18:45.070837 systemd[1]: Starting update-engine.service - Update Engine... May 14 01:18:45.088982 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 01:18:45.104780 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
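coreos-metadata (the Flatcar metadata agent) fetches its data from Hetzner's link-local metadata service at 169.254.169.254; the same agent is invoked again a little later, via sshkeys.service, for the public-keys endpoint. A sketch of the equivalent queries, only meaningful from inside a Hetzner VM and using the endpoints that appear in this log:

    # Sketch: query the Hetzner metadata endpoints that coreos-metadata hits above.
    # The link-local address and paths are taken from the log; this only works
    # from inside the VM itself.
    import urllib.request

    BASE = "http://169.254.169.254/hetzner/v1/metadata"
    for path in ("", "/private-networks", "/public-keys"):
        with urllib.request.urlopen(BASE + path, timeout=5) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            print(f"== {BASE + path} ==")
            print(body[:200])   # print only a short prefix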
May 14 01:18:45.190340 update_engine[1508]: I20250514 01:18:45.162518 1508 main.cc:92] Flatcar Update Engine starting May 14 01:18:45.190340 update_engine[1508]: I20250514 01:18:45.184973 1508 update_check_scheduler.cc:74] Next update check in 7m44s May 14 01:18:45.129941 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 01:18:45.190582 jq[1514]: true May 14 01:18:45.151900 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 01:18:45.152093 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 01:18:45.152321 systemd[1]: motdgen.service: Deactivated successfully. May 14 01:18:45.152465 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 01:18:45.157973 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 01:18:45.158133 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 14 01:18:45.203267 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 01:18:45.207394 jq[1526]: true May 14 01:18:45.208152 tar[1519]: linux-amd64/LICENSE May 14 01:18:45.212798 tar[1519]: linux-amd64/helm May 14 01:18:45.223679 kernel: EXT4-fs (sda9): resized filesystem to 9393147 May 14 01:18:45.228861 systemd[1]: Started update-engine.service - Update Engine. May 14 01:18:45.231476 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 01:18:45.231496 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 01:18:45.234020 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 01:18:45.234040 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 01:18:45.237765 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 01:18:45.252929 extend-filesystems[1507]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required May 14 01:18:45.252929 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 5 May 14 01:18:45.252929 extend-filesystems[1507]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. May 14 01:18:45.269103 extend-filesystems[1489]: Resized filesystem in /dev/sda9 May 14 01:18:45.269103 extend-filesystems[1489]: Found sr0 May 14 01:18:45.286080 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 01:18:45.286717 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 01:18:45.309449 systemd-logind[1504]: New seat seat0. May 14 01:18:45.316505 systemd-logind[1504]: Watching system buttons on /dev/input/event2 (Power Button) May 14 01:18:45.316523 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 14 01:18:45.316817 systemd[1]: Started systemd-logind.service - User Login Management. May 14 01:18:45.345063 bash[1554]: Updated "/home/core/.ssh/authorized_keys" May 14 01:18:45.349144 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 01:18:45.367564 systemd[1]: Starting sshkeys.service... 
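The extend-filesystems output above records an online ext4 grow of /dev/sda9 from 1617920 to 9393147 blocks of 4 KiB. As a back-of-the-envelope check of what that means in bytes:

    # Rough arithmetic for the resize2fs numbers above (4 KiB ext4 blocks).
    BLOCK = 4096
    old_blocks, new_blocks = 1_617_920, 9_393_147

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before resize: {gib(old_blocks):5.1f} GiB")   # ~6.2 GiB
    print(f"after resize:  {gib(new_blocks):5.1f} GiB")   # ~35.8 GiB

In other words, the root filesystem grew roughly sixfold to fill its partition.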
May 14 01:18:45.371843 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 14 01:18:45.374357 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 14 01:18:45.407750 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 14 01:18:45.413165 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 14 01:18:45.464548 coreos-metadata[1564]: May 14 01:18:45.463 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 May 14 01:18:45.464919 coreos-metadata[1564]: May 14 01:18:45.464 INFO Fetch successful May 14 01:18:45.468148 unknown[1564]: wrote ssh authorized keys file for user: core May 14 01:18:45.493269 locksmithd[1536]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 01:18:45.512609 update-ssh-keys[1573]: Updated "/home/core/.ssh/authorized_keys" May 14 01:18:45.514598 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 14 01:18:45.523116 systemd[1]: Finished sshkeys.service. May 14 01:18:45.594459 containerd[1527]: time="2025-05-14T01:18:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 01:18:45.595848 containerd[1527]: time="2025-05-14T01:18:45.595829431Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 14 01:18:45.616271 containerd[1527]: time="2025-05-14T01:18:45.616212542Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.967µs" May 14 01:18:45.616271 containerd[1527]: time="2025-05-14T01:18:45.616261955Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 01:18:45.616271 containerd[1527]: time="2025-05-14T01:18:45.616284664Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 01:18:45.616463 containerd[1527]: time="2025-05-14T01:18:45.616440500Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 01:18:45.616491 containerd[1527]: time="2025-05-14T01:18:45.616470096Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 01:18:45.616510 containerd[1527]: time="2025-05-14T01:18:45.616499518Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 01:18:45.616577 containerd[1527]: time="2025-05-14T01:18:45.616555119Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 01:18:45.616577 containerd[1527]: time="2025-05-14T01:18:45.616573133Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 01:18:45.621837 containerd[1527]: time="2025-05-14T01:18:45.621812276Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 01:18:45.621837 
containerd[1527]: time="2025-05-14T01:18:45.621833883Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 01:18:45.621893 containerd[1527]: time="2025-05-14T01:18:45.621846340Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 01:18:45.621893 containerd[1527]: time="2025-05-14T01:18:45.621855995Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 01:18:45.621969 containerd[1527]: time="2025-05-14T01:18:45.621925298Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 01:18:45.622132 containerd[1527]: time="2025-05-14T01:18:45.622108908Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 01:18:45.622155 containerd[1527]: time="2025-05-14T01:18:45.622142942Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 01:18:45.622181 containerd[1527]: time="2025-05-14T01:18:45.622153205Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 14 01:18:45.622200 containerd[1527]: time="2025-05-14T01:18:45.622182236Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 01:18:45.622442 containerd[1527]: time="2025-05-14T01:18:45.622422145Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 01:18:45.622495 containerd[1527]: time="2025-05-14T01:18:45.622476408Z" level=info msg="metadata content store policy set" policy=shared May 14 01:18:45.627592 containerd[1527]: time="2025-05-14T01:18:45.627471418Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 01:18:45.627592 containerd[1527]: time="2025-05-14T01:18:45.627534616Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 01:18:45.627592 containerd[1527]: time="2025-05-14T01:18:45.627549841Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 01:18:45.627592 containerd[1527]: time="2025-05-14T01:18:45.627566600Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 01:18:45.627718 containerd[1527]: time="2025-05-14T01:18:45.627578233Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 01:18:45.627769 containerd[1527]: time="2025-05-14T01:18:45.627758611Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 01:18:45.627821 containerd[1527]: time="2025-05-14T01:18:45.627811844Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 01:18:45.627867 containerd[1527]: time="2025-05-14T01:18:45.627858199Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 14 01:18:45.627905 containerd[1527]: time="2025-05-14T01:18:45.627897184Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 May 14 01:18:45.627942 containerd[1527]: time="2025-05-14T01:18:45.627935098Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 01:18:45.627987 containerd[1527]: time="2025-05-14T01:18:45.627978654Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 01:18:45.628031 containerd[1527]: time="2025-05-14T01:18:45.628023280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 01:18:45.628199 containerd[1527]: time="2025-05-14T01:18:45.628185643Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 01:18:45.628266 containerd[1527]: time="2025-05-14T01:18:45.628256344Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 01:18:45.628308 containerd[1527]: time="2025-05-14T01:18:45.628300116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629681314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629697188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629707153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629717541Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629732724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629743966Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629753509Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629763319Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629820319Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629830964Z" level=info msg="Start snapshots syncer" May 14 01:18:45.631046 containerd[1527]: time="2025-05-14T01:18:45.629857935Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 14 01:18:45.631247 containerd[1527]: time="2025-05-14T01:18:45.630121265Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 01:18:45.631247 containerd[1527]: time="2025-05-14T01:18:45.630164585Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630223242Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630306976Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630331199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630343552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630353074Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630363451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630373674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630384112Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630405812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 01:18:45.631375 containerd[1527]: 
time="2025-05-14T01:18:45.630418722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630427894Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630453692Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630465932Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 01:18:45.631375 containerd[1527]: time="2025-05-14T01:18:45.630474352Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 01:18:45.631603 containerd[1527]: time="2025-05-14T01:18:45.630482866Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 01:18:45.631603 containerd[1527]: time="2025-05-14T01:18:45.630490546Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 01:18:45.631603 containerd[1527]: time="2025-05-14T01:18:45.630499142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 01:18:45.631603 containerd[1527]: time="2025-05-14T01:18:45.630509303Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 01:18:45.631603 containerd[1527]: time="2025-05-14T01:18:45.630524106Z" level=info msg="runtime interface created" May 14 01:18:45.631603 containerd[1527]: time="2025-05-14T01:18:45.630528790Z" level=info msg="created NRI interface" May 14 01:18:45.631603 containerd[1527]: time="2025-05-14T01:18:45.630535872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 01:18:45.631603 containerd[1527]: time="2025-05-14T01:18:45.630547463Z" level=info msg="Connect containerd service" May 14 01:18:45.631603 containerd[1527]: time="2025-05-14T01:18:45.630569267Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 14 01:18:45.633904 containerd[1527]: time="2025-05-14T01:18:45.633886541Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 01:18:45.767550 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 01:18:45.799568 containerd[1527]: time="2025-05-14T01:18:45.799518585Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 14 01:18:45.802282 containerd[1527]: time="2025-05-14T01:18:45.799678344Z" level=info msg="Start subscribing containerd event" May 14 01:18:45.802282 containerd[1527]: time="2025-05-14T01:18:45.801837798Z" level=info msg="Start recovering state" May 14 01:18:45.802282 containerd[1527]: time="2025-05-14T01:18:45.801973663Z" level=info msg="Start event monitor" May 14 01:18:45.802282 containerd[1527]: time="2025-05-14T01:18:45.801996989Z" level=info msg="Start cni network conf syncer for default" May 14 01:18:45.802282 containerd[1527]: time="2025-05-14T01:18:45.802022098Z" level=info msg="Start streaming server" May 14 01:18:45.802282 containerd[1527]: time="2025-05-14T01:18:45.802034770Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 01:18:45.802282 containerd[1527]: time="2025-05-14T01:18:45.802043551Z" level=info msg="runtime interface starting up..." May 14 01:18:45.802282 containerd[1527]: time="2025-05-14T01:18:45.802050829Z" level=info msg="starting plugins..." May 14 01:18:45.802282 containerd[1527]: time="2025-05-14T01:18:45.802072190Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 01:18:45.802697 containerd[1527]: time="2025-05-14T01:18:45.802676337Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 01:18:45.802863 containerd[1527]: time="2025-05-14T01:18:45.802843744Z" level=info msg="containerd successfully booted in 0.208748s" May 14 01:18:45.804835 systemd[1]: Started containerd.service - containerd container runtime. May 14 01:18:45.816287 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 01:18:45.826767 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 01:18:45.847751 systemd[1]: issuegen.service: Deactivated successfully. May 14 01:18:45.847934 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 01:18:45.854998 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 01:18:45.873272 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 01:18:45.880875 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 01:18:45.884153 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 14 01:18:45.884629 systemd[1]: Reached target getty.target - Login Prompts. May 14 01:18:45.935813 systemd-networkd[1401]: eth0: Gained IPv6LL May 14 01:18:45.942186 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 01:18:45.948443 tar[1519]: linux-amd64/README.md May 14 01:18:45.948090 systemd[1]: Reached target network-online.target - Network is Online. May 14 01:18:45.954914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:18:45.963629 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 01:18:45.967845 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 01:18:45.996997 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 01:18:46.512368 systemd-networkd[1401]: eth1: Gained IPv6LL May 14 01:18:47.196707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:18:47.201914 systemd[1]: Reached target multi-user.target - Multi-User System. 
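containerd reports a healthy start ("containerd successfully booted in 0.208748s") but also that its CRI plugin found no CNI network config, since /etc/cni/net.d (the confDir in the dumped configuration) is still empty at this point; pod networking stays unavailable until some CNI plugin installs a config there. A small sketch of that check, where the .conf/.conflist/.json extensions are an assumption about the conventional file names rather than something taken from this log:

    # Sketch: is any CNI network config present yet? Mirrors the
    # "no network config found in /etc/cni/net.d" message from containerd.
    from pathlib import Path

    conf_dir = Path("/etc/cni/net.d")
    found = []
    if conf_dir.is_dir():
        found = sorted(p.name for p in conf_dir.iterdir()
                       if p.suffix in (".conf", ".conflist", ".json"))

    if found:
        print("CNI config present:", ", ".join(found))
    else:
        print("no CNI network config yet - pod networking not initialized")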
May 14 01:18:47.203552 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:18:47.205124 systemd[1]: Startup finished in 1.407s (kernel) + 7.488s (initrd) + 4.972s (userspace) = 13.867s. May 14 01:18:48.105211 kubelet[1627]: E0514 01:18:48.105128 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:18:48.109063 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:18:48.109345 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:18:48.110050 systemd[1]: kubelet.service: Consumed 1.497s CPU time, 251.1M memory peak. May 14 01:18:58.360849 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 01:18:58.364380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:18:58.520618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:18:58.525816 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:18:58.581469 kubelet[1644]: E0514 01:18:58.581379 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:18:58.587124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:18:58.587358 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:18:58.587829 systemd[1]: kubelet.service: Consumed 192ms CPU time, 102.9M memory peak. May 14 01:19:08.792198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 01:19:08.794827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:19:08.942310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:19:08.962025 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:19:09.028556 kubelet[1661]: E0514 01:19:09.028475 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:19:09.032190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:19:09.032369 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:19:09.032775 systemd[1]: kubelet.service: Consumed 208ms CPU time, 101.6M memory peak. May 14 01:19:19.041540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 14 01:19:19.044081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:19:19.190888 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
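The kubelet failure at the end of this block is the first of a long series: the unit starts, exits immediately with status 1 because /var/lib/kubelet/config.yaml does not exist, and systemd schedules a restart. On a kubeadm-style node that file is only written once kubeadm init or kubeadm join has run, so these failures are expected on a machine that has just been provisioned but not yet joined to a cluster. A tiny sketch of the same pre-flight condition, with the path taken verbatim from the error message:

    # Sketch: the file kubelet keeps failing to load above.
    from pathlib import Path

    config = Path("/var/lib/kubelet/config.yaml")
    if config.is_file():
        print(f"{config} present ({config.stat().st_size} bytes); kubelet can start")
    else:
        print(f"{config} missing; kubelet will keep exiting until provisioning writes it")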
May 14 01:19:19.198864 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:19:19.236912 kubelet[1676]: E0514 01:19:19.236823 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:19:19.240854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:19:19.241045 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:19:19.241500 systemd[1]: kubelet.service: Consumed 155ms CPU time, 103.8M memory peak. May 14 01:19:29.292313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 14 01:19:29.295058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:19:29.462710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:19:29.471844 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:19:29.518873 kubelet[1691]: E0514 01:19:29.518784 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:19:29.522143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:19:29.522382 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:19:29.522869 systemd[1]: kubelet.service: Consumed 191ms CPU time, 103.9M memory peak. May 14 01:19:30.375971 update_engine[1508]: I20250514 01:19:30.375792 1508 update_attempter.cc:509] Updating boot flags... May 14 01:19:30.439711 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1707) May 14 01:19:30.502744 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1703) May 14 01:19:30.549675 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1703) May 14 01:19:39.542450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 14 01:19:39.545623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:19:39.722073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:19:39.730899 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:19:39.777803 kubelet[1727]: E0514 01:19:39.777671 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:19:39.781617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:19:39.781800 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 14 01:19:39.782121 systemd[1]: kubelet.service: Consumed 201ms CPU time, 103.9M memory peak. May 14 01:19:49.792314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. May 14 01:19:49.796012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:19:49.973610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:19:49.976791 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:19:50.015409 kubelet[1742]: E0514 01:19:50.015325 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:19:50.018307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:19:50.018561 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:19:50.018964 systemd[1]: kubelet.service: Consumed 181ms CPU time, 103.7M memory peak. May 14 01:20:00.042830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. May 14 01:20:00.046506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:20:00.210498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:20:00.218220 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:20:00.288464 kubelet[1758]: E0514 01:20:00.288379 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:20:00.291825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:20:00.291995 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:20:00.292336 systemd[1]: kubelet.service: Consumed 209ms CPU time, 103.8M memory peak. May 14 01:20:10.542243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. May 14 01:20:10.544828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:20:10.716633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:20:10.725871 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:20:10.772917 kubelet[1774]: E0514 01:20:10.772794 1774 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:20:10.776972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:20:10.777223 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:20:10.778041 systemd[1]: kubelet.service: Consumed 190ms CPU time, 103.7M memory peak. 
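systemd keeps rescheduling kubelet.service, and the restart counter climbs by one roughly every ten seconds, which is consistent with a Restart=always unit using a restart delay of about 10 s (the exact RestartSec is not shown in this log and is an assumption here). The cadence can be read straight off the timestamps of the "Scheduled restart job" messages:

    # Sketch: intervals between the "Scheduled restart job, restart counter is at N"
    # messages above (counters 1-5); timestamps copied from the log.
    from datetime import datetime

    stamps = [
        "01:18:58.360849",   # counter 1
        "01:19:08.792198",   # counter 2
        "01:19:19.041540",   # counter 3
        "01:19:29.292313",   # counter 4
        "01:19:39.542450",   # counter 5
    ]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    for a, b in zip(times, times[1:]):
        print(f"{(b - a).total_seconds():.2f} s between restarts")   # roughly 10 s apart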
May 14 01:20:15.155457 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 01:20:15.157929 systemd[1]: Started sshd@0-37.27.220.42:22-192.81.213.83:50388.service - OpenSSH per-connection server daemon (192.81.213.83:50388). May 14 01:20:15.791923 sshd[1782]: Invalid user ruslan from 192.81.213.83 port 50388 May 14 01:20:15.902129 sshd[1782]: Received disconnect from 192.81.213.83 port 50388:11: Bye Bye [preauth] May 14 01:20:15.902129 sshd[1782]: Disconnected from invalid user ruslan 192.81.213.83 port 50388 [preauth] May 14 01:20:15.905264 systemd[1]: sshd@0-37.27.220.42:22-192.81.213.83:50388.service: Deactivated successfully. May 14 01:20:20.792231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. May 14 01:20:20.795549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:20:20.967433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:20:20.976034 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:20:21.038733 kubelet[1794]: E0514 01:20:21.038617 1794 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:20:21.042410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:20:21.042701 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:20:21.043365 systemd[1]: kubelet.service: Consumed 206ms CPU time, 103.6M memory peak. May 14 01:20:29.902588 systemd[1]: Started sshd@1-37.27.220.42:22-139.178.89.65:59022.service - OpenSSH per-connection server daemon (139.178.89.65:59022). May 14 01:20:30.909830 sshd[1802]: Accepted publickey for core from 139.178.89.65 port 59022 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg May 14 01:20:30.913275 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:20:30.933519 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 01:20:30.937102 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 14 01:20:30.944013 systemd-logind[1504]: New session 1 of user core. May 14 01:20:30.970874 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 01:20:30.976323 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 01:20:30.997323 (systemd)[1806]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 01:20:31.001117 systemd-logind[1504]: New session c1 of user core. May 14 01:20:31.181587 systemd[1806]: Queued start job for default target default.target. May 14 01:20:31.187389 systemd[1806]: Created slice app.slice - User Application Slice. May 14 01:20:31.187410 systemd[1806]: Reached target paths.target - Paths. May 14 01:20:31.187439 systemd[1806]: Reached target timers.target - Timers. May 14 01:20:31.189281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. May 14 01:20:31.190697 systemd[1806]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 01:20:31.192892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
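The first connection on the public SSH port is a probe for a non-existent account ("Invalid user ruslan from 192.81.213.83") that disconnects before authenticating; everything after it, from 139.178.89.65, is the legitimate key-based access for the core user. A sketch of tallying such probes per source address, matched against the exact message shape used in this journal; the sample list stands in for whatever journal lines one would actually feed it:

    # Sketch: count "Invalid user" probes per source IP in sshd journal lines.
    import re
    from collections import Counter

    lines = [
        "sshd[1782]: Invalid user ruslan from 192.81.213.83 port 50388",
        # ...further journal lines would be appended here...
    ]

    pattern = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    probes = Counter()
    for line in lines:
        m = pattern.search(line)
        if m:
            probes[m.group(2)] += 1

    for ip, count in probes.most_common():
        print(f"{ip}: {count} invalid-user attempt(s)")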
May 14 01:20:31.197242 systemd[1806]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 01:20:31.197281 systemd[1806]: Reached target sockets.target - Sockets. May 14 01:20:31.197308 systemd[1806]: Reached target basic.target - Basic System. May 14 01:20:31.197334 systemd[1806]: Reached target default.target - Main User Target. May 14 01:20:31.197354 systemd[1806]: Startup finished in 186ms. May 14 01:20:31.197431 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 01:20:31.207906 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 01:20:31.324919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:20:31.328327 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:20:31.368068 kubelet[1823]: E0514 01:20:31.367974 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:20:31.370705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:20:31.370935 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:20:31.371388 systemd[1]: kubelet.service: Consumed 132ms CPU time, 104.3M memory peak. May 14 01:20:31.904600 systemd[1]: Started sshd@2-37.27.220.42:22-139.178.89.65:59032.service - OpenSSH per-connection server daemon (139.178.89.65:59032). May 14 01:20:32.902363 sshd[1832]: Accepted publickey for core from 139.178.89.65 port 59032 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg May 14 01:20:32.904635 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:20:32.912925 systemd-logind[1504]: New session 2 of user core. May 14 01:20:32.920950 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 01:20:33.578343 sshd[1834]: Connection closed by 139.178.89.65 port 59032 May 14 01:20:33.579319 sshd-session[1832]: pam_unix(sshd:session): session closed for user core May 14 01:20:33.583363 systemd[1]: sshd@2-37.27.220.42:22-139.178.89.65:59032.service: Deactivated successfully. May 14 01:20:33.586219 systemd[1]: session-2.scope: Deactivated successfully. May 14 01:20:33.588271 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit. May 14 01:20:33.589886 systemd-logind[1504]: Removed session 2. May 14 01:20:33.755532 systemd[1]: Started sshd@3-37.27.220.42:22-139.178.89.65:59046.service - OpenSSH per-connection server daemon (139.178.89.65:59046). May 14 01:20:34.757619 sshd[1840]: Accepted publickey for core from 139.178.89.65 port 59046 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg May 14 01:20:34.760184 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:20:34.768793 systemd-logind[1504]: New session 3 of user core. May 14 01:20:34.780957 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 01:20:35.431674 sshd[1842]: Connection closed by 139.178.89.65 port 59046 May 14 01:20:35.432572 sshd-session[1840]: pam_unix(sshd:session): session closed for user core May 14 01:20:35.436682 systemd[1]: sshd@3-37.27.220.42:22-139.178.89.65:59046.service: Deactivated successfully. 
May 14 01:20:35.440084 systemd[1]: session-3.scope: Deactivated successfully. May 14 01:20:35.442373 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit. May 14 01:20:35.444328 systemd-logind[1504]: Removed session 3. May 14 01:20:35.603393 systemd[1]: Started sshd@4-37.27.220.42:22-139.178.89.65:59050.service - OpenSSH per-connection server daemon (139.178.89.65:59050). May 14 01:20:36.606029 sshd[1848]: Accepted publickey for core from 139.178.89.65 port 59050 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg May 14 01:20:36.608273 sshd-session[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:20:36.615632 systemd-logind[1504]: New session 4 of user core. May 14 01:20:36.626946 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 01:20:37.283858 sshd[1850]: Connection closed by 139.178.89.65 port 59050 May 14 01:20:37.284854 sshd-session[1848]: pam_unix(sshd:session): session closed for user core May 14 01:20:37.289806 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit. May 14 01:20:37.290748 systemd[1]: sshd@4-37.27.220.42:22-139.178.89.65:59050.service: Deactivated successfully. May 14 01:20:37.293437 systemd[1]: session-4.scope: Deactivated successfully. May 14 01:20:37.294947 systemd-logind[1504]: Removed session 4. May 14 01:20:37.455834 systemd[1]: Started sshd@5-37.27.220.42:22-139.178.89.65:37242.service - OpenSSH per-connection server daemon (139.178.89.65:37242). May 14 01:20:38.451248 sshd[1856]: Accepted publickey for core from 139.178.89.65 port 37242 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg May 14 01:20:38.453007 sshd-session[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:20:38.459591 systemd-logind[1504]: New session 5 of user core. May 14 01:20:38.476860 systemd[1]: Started session-5.scope - Session 5 of User core. May 14 01:20:38.982863 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 01:20:38.983317 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 01:20:39.002265 sudo[1859]: pam_unix(sudo:session): session closed for user root May 14 01:20:39.159744 sshd[1858]: Connection closed by 139.178.89.65 port 37242 May 14 01:20:39.160926 sshd-session[1856]: pam_unix(sshd:session): session closed for user core May 14 01:20:39.165456 systemd[1]: sshd@5-37.27.220.42:22-139.178.89.65:37242.service: Deactivated successfully. May 14 01:20:39.168194 systemd[1]: session-5.scope: Deactivated successfully. May 14 01:20:39.170289 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit. May 14 01:20:39.171969 systemd-logind[1504]: Removed session 5. May 14 01:20:39.332048 systemd[1]: Started sshd@6-37.27.220.42:22-139.178.89.65:37256.service - OpenSSH per-connection server daemon (139.178.89.65:37256). May 14 01:20:40.333234 sshd[1865]: Accepted publickey for core from 139.178.89.65 port 37256 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg May 14 01:20:40.335393 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:20:40.342729 systemd-logind[1504]: New session 6 of user core. May 14 01:20:40.357880 systemd[1]: Started session-6.scope - Session 6 of User core. 
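
Session 5 above runs "sudo setenforce 1". A quick sketch to check whether that actually left SELinux in enforcing mode, assuming selinuxfs is mounted at the usual /sys/fs/selinux:

    import pathlib

    ENFORCE = pathlib.Path("/sys/fs/selinux/enforce")

    def selinux_mode() -> str:
        if not ENFORCE.exists():
            return "unavailable (selinuxfs not mounted)"
        return "enforcing" if ENFORCE.read_text().strip() == "1" else "permissive"

    if __name__ == "__main__":
        print("SELinux:", selinux_mode())
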
May 14 01:20:40.854377 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 01:20:40.854849 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 01:20:40.860240 sudo[1869]: pam_unix(sudo:session): session closed for user root May 14 01:20:40.868548 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 01:20:40.869025 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 01:20:40.884466 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 01:20:40.935323 augenrules[1891]: No rules May 14 01:20:40.936274 systemd[1]: audit-rules.service: Deactivated successfully. May 14 01:20:40.936583 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 01:20:40.938748 sudo[1868]: pam_unix(sudo:session): session closed for user root May 14 01:20:41.096839 sshd[1867]: Connection closed by 139.178.89.65 port 37256 May 14 01:20:41.097727 sshd-session[1865]: pam_unix(sshd:session): session closed for user core May 14 01:20:41.103007 systemd[1]: sshd@6-37.27.220.42:22-139.178.89.65:37256.service: Deactivated successfully. May 14 01:20:41.106273 systemd[1]: session-6.scope: Deactivated successfully. May 14 01:20:41.108439 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit. May 14 01:20:41.110319 systemd-logind[1504]: Removed session 6. May 14 01:20:41.270939 systemd[1]: Started sshd@7-37.27.220.42:22-139.178.89.65:37260.service - OpenSSH per-connection server daemon (139.178.89.65:37260). May 14 01:20:41.542170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. May 14 01:20:41.545847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:20:41.724671 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:20:41.730895 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:20:41.780737 kubelet[1910]: E0514 01:20:41.780584 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:20:41.783634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:20:41.783828 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:20:41.784187 systemd[1]: kubelet.service: Consumed 194ms CPU time, 103.8M memory peak. May 14 01:20:42.260465 sshd[1900]: Accepted publickey for core from 139.178.89.65 port 37260 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg May 14 01:20:42.262530 sshd-session[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:20:42.270578 systemd-logind[1504]: New session 7 of user core. May 14 01:20:42.277895 systemd[1]: Started session-7.scope - Session 7 of User core. 
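
After the two rule files are deleted and audit-rules.service is restarted, augenrules reports "No rules". A sketch, assuming auditctl is installed and the script runs as root, that shows what is left on disk and what the kernel currently has loaded:

    import glob
    import subprocess

    def main() -> None:
        print("rule files on disk:", glob.glob("/etc/audit/rules.d/*.rules") or "none")
        loaded = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
        print("rules loaded in kernel:", loaded.stdout.strip() or loaded.stderr.strip())

    if __name__ == "__main__":
        main()
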
May 14 01:20:42.779389 sudo[1918]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 01:20:42.779894 sudo[1918]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 01:20:43.258151 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 01:20:43.273162 (dockerd)[1936]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 01:20:43.705714 dockerd[1936]: time="2025-05-14T01:20:43.704901991Z" level=info msg="Starting up" May 14 01:20:43.710458 dockerd[1936]: time="2025-05-14T01:20:43.710284593Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 01:20:43.791492 dockerd[1936]: time="2025-05-14T01:20:43.791404321Z" level=info msg="Loading containers: start." May 14 01:20:43.961733 kernel: Initializing XFRM netlink socket May 14 01:20:44.057087 systemd-networkd[1401]: docker0: Link UP May 14 01:20:44.106872 dockerd[1936]: time="2025-05-14T01:20:44.106813337Z" level=info msg="Loading containers: done." May 14 01:20:44.124141 dockerd[1936]: time="2025-05-14T01:20:44.124084540Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 01:20:44.124370 dockerd[1936]: time="2025-05-14T01:20:44.124176052Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 14 01:20:44.124370 dockerd[1936]: time="2025-05-14T01:20:44.124252084Z" level=info msg="Daemon has completed initialization" May 14 01:20:44.174410 dockerd[1936]: time="2025-05-14T01:20:44.174319925Z" level=info msg="API listen on /run/docker.sock" May 14 01:20:44.174613 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 01:20:45.581398 containerd[1527]: time="2025-05-14T01:20:45.580897821Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 14 01:20:46.223631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3981426041.mount: Deactivated successfully. 
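
dockerd reports "API listen on /run/docker.sock" once initialization completes. A minimal check, using only the socket path from the log, that the daemon is actually accepting connections (it does not speak the HTTP API, it only tests connectivity):

    import socket

    def docker_socket_up(path: str = "/run/docker.sock", timeout: float = 2.0) -> bool:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect(path)
            return True
        except OSError:
            return False
        finally:
            s.close()

    if __name__ == "__main__":
        print("docker.sock reachable:", docker_socket_up())
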
May 14 01:20:48.397466 containerd[1527]: time="2025-05-14T01:20:48.397405133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:48.398346 containerd[1527]: time="2025-05-14T01:20:48.398293512Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682973" May 14 01:20:48.399325 containerd[1527]: time="2025-05-14T01:20:48.399274275Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:48.401720 containerd[1527]: time="2025-05-14T01:20:48.401652713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:48.402501 containerd[1527]: time="2025-05-14T01:20:48.402347149Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.82139215s" May 14 01:20:48.402501 containerd[1527]: time="2025-05-14T01:20:48.402384689Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 14 01:20:48.403105 containerd[1527]: time="2025-05-14T01:20:48.403031774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 14 01:20:50.799928 containerd[1527]: time="2025-05-14T01:20:50.799864218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:50.801078 containerd[1527]: time="2025-05-14T01:20:50.801016045Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779611" May 14 01:20:50.802257 containerd[1527]: time="2025-05-14T01:20:50.802207375Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:50.804736 containerd[1527]: time="2025-05-14T01:20:50.804670378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:50.805526 containerd[1527]: time="2025-05-14T01:20:50.805402895Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.40233918s" May 14 01:20:50.805526 containerd[1527]: time="2025-05-14T01:20:50.805429987Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 14 01:20:50.806314 
containerd[1527]: time="2025-05-14T01:20:50.806261391Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 14 01:20:51.792083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. May 14 01:20:51.794902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:20:51.935801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:20:51.943824 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 01:20:51.985677 kubelet[2201]: E0514 01:20:51.985531 2201 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 01:20:51.987228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 01:20:51.987396 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 01:20:51.987890 systemd[1]: kubelet.service: Consumed 166ms CPU time, 103.6M memory peak. May 14 01:20:52.295870 containerd[1527]: time="2025-05-14T01:20:52.295826944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:52.296927 containerd[1527]: time="2025-05-14T01:20:52.296888031Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169960" May 14 01:20:52.298024 containerd[1527]: time="2025-05-14T01:20:52.297991329Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:52.300467 containerd[1527]: time="2025-05-14T01:20:52.300431554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:52.301461 containerd[1527]: time="2025-05-14T01:20:52.301340966Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 1.495052664s" May 14 01:20:52.301461 containerd[1527]: time="2025-05-14T01:20:52.301370722Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 14 01:20:52.301965 containerd[1527]: time="2025-05-14T01:20:52.301940355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 14 01:20:53.309618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338336127.mount: Deactivated successfully. 
May 14 01:20:53.615518 containerd[1527]: time="2025-05-14T01:20:53.615165588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:53.616144 containerd[1527]: time="2025-05-14T01:20:53.616066905Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917884" May 14 01:20:53.616985 containerd[1527]: time="2025-05-14T01:20:53.616941814Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:53.618393 containerd[1527]: time="2025-05-14T01:20:53.618377198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:53.619063 containerd[1527]: time="2025-05-14T01:20:53.618822336Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 1.316850642s" May 14 01:20:53.619063 containerd[1527]: time="2025-05-14T01:20:53.618855949Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 14 01:20:53.619519 containerd[1527]: time="2025-05-14T01:20:53.619479644Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 14 01:20:54.149862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount538868351.mount: Deactivated successfully. 
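
A back-of-the-envelope check of the pull rates containerd reports above works out to roughly 10-22 MiB/s; the sizes (bytes) and durations (seconds) below are copied from the log lines:

    # image: (reported size in bytes, reported pull duration in seconds)
    PULLS = {
        "kube-apiserver:v1.32.4":          (28_679_679, 2.82139215),
        "kube-controller-manager:v1.32.4": (26_267_962, 2.40233918),
        "kube-scheduler:v1.32.4":          (20_658_329, 1.495052664),
        "kube-proxy:v1.32.4":              (30_916_875, 1.316850642),
    }

    for image, (size, seconds) in PULLS.items():
        print(f"{image}: {size / seconds / 1024 / 1024:.1f} MiB/s")
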
May 14 01:20:55.030606 containerd[1527]: time="2025-05-14T01:20:55.030537575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:55.031802 containerd[1527]: time="2025-05-14T01:20:55.031754369Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335" May 14 01:20:55.033427 containerd[1527]: time="2025-05-14T01:20:55.033388410Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:55.035847 containerd[1527]: time="2025-05-14T01:20:55.035787914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:55.036656 containerd[1527]: time="2025-05-14T01:20:55.036510466Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.417004182s" May 14 01:20:55.036656 containerd[1527]: time="2025-05-14T01:20:55.036538098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 14 01:20:55.037109 containerd[1527]: time="2025-05-14T01:20:55.037060493Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 01:20:55.522777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2298381218.mount: Deactivated successfully. 
May 14 01:20:55.533072 containerd[1527]: time="2025-05-14T01:20:55.532992126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 01:20:55.534314 containerd[1527]: time="2025-05-14T01:20:55.534236602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" May 14 01:20:55.535808 containerd[1527]: time="2025-05-14T01:20:55.535774601Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 01:20:55.539372 containerd[1527]: time="2025-05-14T01:20:55.539311891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 01:20:55.540563 containerd[1527]: time="2025-05-14T01:20:55.540507655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 503.399542ms" May 14 01:20:55.540563 containerd[1527]: time="2025-05-14T01:20:55.540551989Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 14 01:20:55.541372 containerd[1527]: time="2025-05-14T01:20:55.541313395Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 14 01:20:56.072439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594091276.mount: Deactivated successfully. 
May 14 01:20:57.448063 containerd[1527]: time="2025-05-14T01:20:57.447983211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:57.449407 containerd[1527]: time="2025-05-14T01:20:57.449356042Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551430" May 14 01:20:57.450623 containerd[1527]: time="2025-05-14T01:20:57.450523275Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:57.453375 containerd[1527]: time="2025-05-14T01:20:57.453336202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:20:57.455882 containerd[1527]: time="2025-05-14T01:20:57.455788920Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.91442469s" May 14 01:20:57.455882 containerd[1527]: time="2025-05-14T01:20:57.455838474Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 14 01:20:58.332885 systemd[1]: Started sshd@8-37.27.220.42:22-121.229.10.68:21187.service - OpenSSH per-connection server daemon (121.229.10.68:21187). May 14 01:21:00.010791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:21:00.011021 systemd[1]: kubelet.service: Consumed 166ms CPU time, 103.6M memory peak. May 14 01:21:00.013544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:21:00.044708 systemd[1]: Reload requested from client PID 2362 ('systemctl') (unit session-7.scope)... May 14 01:21:00.044724 systemd[1]: Reloading... May 14 01:21:00.161690 zram_generator::config[2410]: No configuration found. May 14 01:21:00.256233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 01:21:00.362300 systemd[1]: Reloading finished in 317 ms. May 14 01:21:00.396831 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 01:21:00.396896 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 01:21:00.397079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:21:00.397128 systemd[1]: kubelet.service: Consumed 89ms CPU time, 91.6M memory peak. May 14 01:21:00.400351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 01:21:00.536026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 01:21:00.540872 (kubelet)[2462]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 01:21:00.589845 kubelet[2462]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 01:21:00.589845 kubelet[2462]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 14 01:21:00.589845 kubelet[2462]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 01:21:00.590385 kubelet[2462]: I0514 01:21:00.589889 2462 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 01:21:00.669343 sshd[2353]: Invalid user ubuntu from 121.229.10.68 port 21187 May 14 01:21:00.742386 kubelet[2462]: I0514 01:21:00.742316 2462 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 01:21:00.742386 kubelet[2462]: I0514 01:21:00.742349 2462 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 01:21:00.742646 kubelet[2462]: I0514 01:21:00.742604 2462 server.go:954] "Client rotation is on, will bootstrap in background" May 14 01:21:00.776045 kubelet[2462]: E0514 01:21:00.775986 2462 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://37.27.220.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.220.42:6443: connect: connection refused" logger="UnhandledError" May 14 01:21:00.776443 kubelet[2462]: I0514 01:21:00.776196 2462 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 01:21:00.804194 kubelet[2462]: I0514 01:21:00.804165 2462 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 01:21:00.811043 kubelet[2462]: I0514 01:21:00.811006 2462 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 01:21:00.816539 kubelet[2462]: I0514 01:21:00.816472 2462 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 01:21:00.816759 kubelet[2462]: I0514 01:21:00.816509 2462 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-c0828c9b46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 01:21:00.819155 kubelet[2462]: I0514 01:21:00.819085 2462 topology_manager.go:138] "Creating topology manager with none policy" May 14 01:21:00.819155 kubelet[2462]: I0514 01:21:00.819107 2462 container_manager_linux.go:304] "Creating device plugin manager" May 14 01:21:00.819275 kubelet[2462]: I0514 01:21:00.819239 2462 state_mem.go:36] "Initialized new in-memory state store" May 14 01:21:00.823628 kubelet[2462]: I0514 01:21:00.823576 2462 kubelet.go:446] "Attempting to sync node with API server" May 14 01:21:00.823628 kubelet[2462]: I0514 01:21:00.823596 2462 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 01:21:00.826084 kubelet[2462]: I0514 01:21:00.826009 2462 kubelet.go:352] "Adding apiserver pod source" May 14 01:21:00.826084 kubelet[2462]: I0514 01:21:00.826034 2462 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 01:21:00.835511 kubelet[2462]: W0514 01:21:00.835103 2462 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.220.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-c0828c9b46&limit=500&resourceVersion=0": dial tcp 37.27.220.42:6443: connect: connection refused May 14 01:21:00.835511 kubelet[2462]: E0514 01:21:00.835231 2462 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.220.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-c0828c9b46&limit=500&resourceVersion=0\": dial tcp 37.27.220.42:6443: connect: connection refused" logger="UnhandledError" May 14 01:21:00.835511 
kubelet[2462]: I0514 01:21:00.835352 2462 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 01:21:00.841687 kubelet[2462]: I0514 01:21:00.841513 2462 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 01:21:00.843011 kubelet[2462]: W0514 01:21:00.842946 2462 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.220.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.220.42:6443: connect: connection refused May 14 01:21:00.843011 kubelet[2462]: E0514 01:21:00.842998 2462 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.220.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.220.42:6443: connect: connection refused" logger="UnhandledError" May 14 01:21:00.845667 kubelet[2462]: W0514 01:21:00.845605 2462 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 01:21:00.846337 kubelet[2462]: I0514 01:21:00.846279 2462 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 01:21:00.846337 kubelet[2462]: I0514 01:21:00.846313 2462 server.go:1287] "Started kubelet" May 14 01:21:00.850622 kubelet[2462]: I0514 01:21:00.850255 2462 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 01:21:00.855242 kubelet[2462]: I0514 01:21:00.854521 2462 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 01:21:00.855242 kubelet[2462]: I0514 01:21:00.855103 2462 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 01:21:00.857628 kubelet[2462]: I0514 01:21:00.857569 2462 server.go:490] "Adding debug handlers to kubelet server" May 14 01:21:00.862904 kubelet[2462]: E0514 01:21:00.858040 2462 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.220.42:6443/api/v1/namespaces/default/events\": dial tcp 37.27.220.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-n-c0828c9b46.183f4019ff66455f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-n-c0828c9b46,UID:ci-4284-0-0-n-c0828c9b46,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-c0828c9b46,},FirstTimestamp:2025-05-14 01:21:00.846294367 +0000 UTC m=+0.302480509,LastTimestamp:2025-05-14 01:21:00.846294367 +0000 UTC m=+0.302480509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-c0828c9b46,}" May 14 01:21:00.866591 kubelet[2462]: I0514 01:21:00.866380 2462 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 01:21:00.867771 kubelet[2462]: I0514 01:21:00.866868 2462 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 01:21:00.867771 kubelet[2462]: I0514 01:21:00.867308 2462 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 01:21:00.872282 kubelet[2462]: I0514 01:21:00.872257 
2462 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 01:21:00.874550 kubelet[2462]: I0514 01:21:00.874525 2462 reconciler.go:26] "Reconciler: start to sync state" May 14 01:21:00.875774 kubelet[2462]: W0514 01:21:00.875483 2462 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.220.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.220.42:6443: connect: connection refused May 14 01:21:00.875774 kubelet[2462]: E0514 01:21:00.875571 2462 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.220.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.220.42:6443: connect: connection refused" logger="UnhandledError" May 14 01:21:00.875995 kubelet[2462]: E0514 01:21:00.875981 2462 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-c0828c9b46\" not found" May 14 01:21:00.876762 kubelet[2462]: E0514 01:21:00.876742 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.220.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-c0828c9b46?timeout=10s\": dial tcp 37.27.220.42:6443: connect: connection refused" interval="200ms" May 14 01:21:00.877032 kubelet[2462]: I0514 01:21:00.877019 2462 factory.go:221] Registration of the systemd container factory successfully May 14 01:21:00.877130 kubelet[2462]: I0514 01:21:00.877118 2462 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 01:21:00.881587 kubelet[2462]: E0514 01:21:00.881574 2462 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 01:21:00.881815 kubelet[2462]: I0514 01:21:00.881806 2462 factory.go:221] Registration of the containerd container factory successfully May 14 01:21:00.886671 sshd[2353]: Received disconnect from 121.229.10.68 port 21187:11: Bye Bye [preauth] May 14 01:21:00.886745 sshd[2353]: Disconnected from invalid user ubuntu 121.229.10.68 port 21187 [preauth] May 14 01:21:00.893010 systemd[1]: sshd@8-37.27.220.42:22-121.229.10.68:21187.service: Deactivated successfully. May 14 01:21:00.905985 kubelet[2462]: I0514 01:21:00.905954 2462 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 01:21:00.906903 kubelet[2462]: I0514 01:21:00.906892 2462 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 01:21:00.906958 kubelet[2462]: I0514 01:21:00.906952 2462 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 01:21:00.907012 kubelet[2462]: I0514 01:21:00.907006 2462 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
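
Every "dial tcp 37.27.220.42:6443: connect: connection refused" above is the kubelet talking to an API server that has not started yet (its static pod is created further down). A sketch that reproduces the same probe against the endpoint taken from the log; /healthz is the standard kube-apiserver health path, and once the apiserver is up the unauthenticated probe may answer 200 or 401 depending on cluster policy:

    import ssl
    import urllib.error
    import urllib.request

    def apiserver_healthz(host: str = "37.27.220.42", port: int = 6443, timeout: float = 3.0):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # probe only; real clients must verify the CA
        url = f"https://{host}:{port}/healthz"
        try:
            with urllib.request.urlopen(url, timeout=timeout, context=ctx) as resp:
                return resp.status, resp.read().decode()
        except urllib.error.URLError as exc:
            return None, f"unreachable: {exc.reason}"

    if __name__ == "__main__":
        print(apiserver_healthz())
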
May 14 01:21:00.907057 kubelet[2462]: I0514 01:21:00.907050 2462 kubelet.go:2388] "Starting kubelet main sync loop" May 14 01:21:00.907131 kubelet[2462]: E0514 01:21:00.907119 2462 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 01:21:00.908924 kubelet[2462]: W0514 01:21:00.908867 2462 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.220.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.220.42:6443: connect: connection refused May 14 01:21:00.908969 kubelet[2462]: E0514 01:21:00.908942 2462 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.220.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.220.42:6443: connect: connection refused" logger="UnhandledError" May 14 01:21:00.912665 kubelet[2462]: I0514 01:21:00.912573 2462 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 01:21:00.912665 kubelet[2462]: I0514 01:21:00.912584 2462 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 01:21:00.912665 kubelet[2462]: I0514 01:21:00.912596 2462 state_mem.go:36] "Initialized new in-memory state store" May 14 01:21:00.916058 kubelet[2462]: I0514 01:21:00.915956 2462 policy_none.go:49] "None policy: Start" May 14 01:21:00.916058 kubelet[2462]: I0514 01:21:00.915970 2462 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 01:21:00.916058 kubelet[2462]: I0514 01:21:00.915979 2462 state_mem.go:35] "Initializing new in-memory state store" May 14 01:21:00.921370 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 01:21:00.931525 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 01:21:00.935759 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 01:21:00.949353 kubelet[2462]: I0514 01:21:00.949336 2462 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 01:21:00.950073 kubelet[2462]: I0514 01:21:00.949591 2462 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 01:21:00.950073 kubelet[2462]: I0514 01:21:00.949603 2462 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 01:21:00.950341 kubelet[2462]: I0514 01:21:00.950317 2462 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 01:21:00.951740 kubelet[2462]: E0514 01:21:00.951717 2462 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 14 01:21:00.951801 kubelet[2462]: E0514 01:21:00.951755 2462 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-n-c0828c9b46\" not found" May 14 01:21:01.028296 systemd[1]: Created slice kubepods-burstable-pod1527dbd9d2e0e5bd676f6e700ed79080.slice - libcontainer container kubepods-burstable-pod1527dbd9d2e0e5bd676f6e700ed79080.slice. 
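
The slice created above, kubepods-burstable-pod1527dbd9d2e0e5bd676f6e700ed79080.slice, follows the kubelet's systemd cgroup driver naming: kubepods-<qos>-pod<uid>.slice nested under kubepods.slice. A sketch of that mapping, assuming cgroup v2 mounted at /sys/fs/cgroup, which matches the "CgroupDriver":"systemd" and "CgroupVersion":2 settings in the Node Config dump earlier:

    import pathlib

    def burstable_pod_slice(pod_uid: str) -> pathlib.Path:
        # systemd driver replaces dashes in the pod UID with underscores
        unit = f"kubepods-burstable-pod{pod_uid.replace('-', '_')}.slice"
        return pathlib.Path("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice") / unit

    if __name__ == "__main__":
        p = burstable_pod_slice("1527dbd9d2e0e5bd676f6e700ed79080")  # UID from the log
        print(p, "exists:", p.exists())
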
May 14 01:21:01.041844 kubelet[2462]: E0514 01:21:01.041670 2462 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-c0828c9b46\" not found" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.051214 systemd[1]: Created slice kubepods-burstable-pod3d398981942494b5dc23c9a73e304dbc.slice - libcontainer container kubepods-burstable-pod3d398981942494b5dc23c9a73e304dbc.slice. May 14 01:21:01.056390 kubelet[2462]: I0514 01:21:01.055578 2462 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.056390 kubelet[2462]: E0514 01:21:01.056138 2462 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://37.27.220.42:6443/api/v1/nodes\": dial tcp 37.27.220.42:6443: connect: connection refused" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.058027 systemd[1]: Created slice kubepods-burstable-podcaff0bbade5aaf97f4bf499d3bc8a317.slice - libcontainer container kubepods-burstable-podcaff0bbade5aaf97f4bf499d3bc8a317.slice. May 14 01:21:01.061171 kubelet[2462]: E0514 01:21:01.061125 2462 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-c0828c9b46\" not found" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.062221 kubelet[2462]: E0514 01:21:01.061945 2462 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-c0828c9b46\" not found" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.081219 kubelet[2462]: I0514 01:21:01.080864 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1527dbd9d2e0e5bd676f6e700ed79080-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-c0828c9b46\" (UID: \"1527dbd9d2e0e5bd676f6e700ed79080\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.081219 kubelet[2462]: I0514 01:21:01.080918 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d398981942494b5dc23c9a73e304dbc-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" (UID: \"3d398981942494b5dc23c9a73e304dbc\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.081219 kubelet[2462]: I0514 01:21:01.080945 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d398981942494b5dc23c9a73e304dbc-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" (UID: \"3d398981942494b5dc23c9a73e304dbc\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.081219 kubelet[2462]: I0514 01:21:01.080964 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d398981942494b5dc23c9a73e304dbc-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" (UID: \"3d398981942494b5dc23c9a73e304dbc\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.081219 kubelet[2462]: I0514 01:21:01.080983 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/3d398981942494b5dc23c9a73e304dbc-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" (UID: \"3d398981942494b5dc23c9a73e304dbc\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.081526 kubelet[2462]: I0514 01:21:01.081004 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d398981942494b5dc23c9a73e304dbc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" (UID: \"3d398981942494b5dc23c9a73e304dbc\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.081526 kubelet[2462]: I0514 01:21:01.081024 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/caff0bbade5aaf97f4bf499d3bc8a317-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-c0828c9b46\" (UID: \"caff0bbade5aaf97f4bf499d3bc8a317\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.081526 kubelet[2462]: I0514 01:21:01.081043 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1527dbd9d2e0e5bd676f6e700ed79080-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-c0828c9b46\" (UID: \"1527dbd9d2e0e5bd676f6e700ed79080\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.081526 kubelet[2462]: I0514 01:21:01.081060 2462 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1527dbd9d2e0e5bd676f6e700ed79080-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-c0828c9b46\" (UID: \"1527dbd9d2e0e5bd676f6e700ed79080\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.082102 kubelet[2462]: E0514 01:21:01.082056 2462 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.220.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-c0828c9b46?timeout=10s\": dial tcp 37.27.220.42:6443: connect: connection refused" interval="400ms" May 14 01:21:01.259728 kubelet[2462]: I0514 01:21:01.259559 2462 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.260215 kubelet[2462]: E0514 01:21:01.260118 2462 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://37.27.220.42:6443/api/v1/nodes\": dial tcp 37.27.220.42:6443: connect: connection refused" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.346387 containerd[1527]: time="2025-05-14T01:21:01.346296756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-c0828c9b46,Uid:1527dbd9d2e0e5bd676f6e700ed79080,Namespace:kube-system,Attempt:0,}" May 14 01:21:01.363615 containerd[1527]: time="2025-05-14T01:21:01.363138676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-c0828c9b46,Uid:caff0bbade5aaf97f4bf499d3bc8a317,Namespace:kube-system,Attempt:0,}" May 14 01:21:01.363615 containerd[1527]: time="2025-05-14T01:21:01.363466084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-c0828c9b46,Uid:3d398981942494b5dc23c9a73e304dbc,Namespace:kube-system,Attempt:0,}" May 14 01:21:01.484211 kubelet[2462]: E0514 01:21:01.484079 2462 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://37.27.220.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-n-c0828c9b46?timeout=10s\": dial tcp 37.27.220.42:6443: connect: connection refused" interval="800ms" May 14 01:21:01.495661 containerd[1527]: time="2025-05-14T01:21:01.495456635Z" level=info msg="connecting to shim 8c01f47515a5d7532286d6090545adff5fd65e39639b0f0f14616de4983377ca" address="unix:///run/containerd/s/1d6056b51e18d46ba6e52b0aa2b12e2c1fcb9595c507472f6322d21a48ca6e63" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:01.501577 containerd[1527]: time="2025-05-14T01:21:01.501486181Z" level=info msg="connecting to shim 454c159fcd3f69ac81b9f21a2ccee0c7c4bfeed8cf63ac7b73bd9a7a470b2d1d" address="unix:///run/containerd/s/6d514fc98448330a3562db492d6f2654ea51d8524a2a03cb2bb68c9e277dcb58" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:01.504549 containerd[1527]: time="2025-05-14T01:21:01.504494656Z" level=info msg="connecting to shim a40ea0bfe470afc30de2c5a527eca5bd6a94263660f1e2aca4cb7f36cf4f9c8d" address="unix:///run/containerd/s/b708869e27e69f179ecfb4f2ae026567c827252927ae8879f9f494e8b83ecd7b" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:01.593787 systemd[1]: Started cri-containerd-a40ea0bfe470afc30de2c5a527eca5bd6a94263660f1e2aca4cb7f36cf4f9c8d.scope - libcontainer container a40ea0bfe470afc30de2c5a527eca5bd6a94263660f1e2aca4cb7f36cf4f9c8d. May 14 01:21:01.598375 systemd[1]: Started cri-containerd-454c159fcd3f69ac81b9f21a2ccee0c7c4bfeed8cf63ac7b73bd9a7a470b2d1d.scope - libcontainer container 454c159fcd3f69ac81b9f21a2ccee0c7c4bfeed8cf63ac7b73bd9a7a470b2d1d. May 14 01:21:01.599844 systemd[1]: Started cri-containerd-8c01f47515a5d7532286d6090545adff5fd65e39639b0f0f14616de4983377ca.scope - libcontainer container 8c01f47515a5d7532286d6090545adff5fd65e39639b0f0f14616de4983377ca. 
May 14 01:21:01.658477 containerd[1527]: time="2025-05-14T01:21:01.658438244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-n-c0828c9b46,Uid:1527dbd9d2e0e5bd676f6e700ed79080,Namespace:kube-system,Attempt:0,} returns sandbox id \"454c159fcd3f69ac81b9f21a2ccee0c7c4bfeed8cf63ac7b73bd9a7a470b2d1d\"" May 14 01:21:01.665108 kubelet[2462]: I0514 01:21:01.664800 2462 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.666209 kubelet[2462]: E0514 01:21:01.665626 2462 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://37.27.220.42:6443/api/v1/nodes\": dial tcp 37.27.220.42:6443: connect: connection refused" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:01.668747 containerd[1527]: time="2025-05-14T01:21:01.668720437Z" level=info msg="CreateContainer within sandbox \"454c159fcd3f69ac81b9f21a2ccee0c7c4bfeed8cf63ac7b73bd9a7a470b2d1d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 01:21:01.679325 containerd[1527]: time="2025-05-14T01:21:01.679264815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-n-c0828c9b46,Uid:3d398981942494b5dc23c9a73e304dbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c01f47515a5d7532286d6090545adff5fd65e39639b0f0f14616de4983377ca\"" May 14 01:21:01.681871 containerd[1527]: time="2025-05-14T01:21:01.681718412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-n-c0828c9b46,Uid:caff0bbade5aaf97f4bf499d3bc8a317,Namespace:kube-system,Attempt:0,} returns sandbox id \"a40ea0bfe470afc30de2c5a527eca5bd6a94263660f1e2aca4cb7f36cf4f9c8d\"" May 14 01:21:01.682332 containerd[1527]: time="2025-05-14T01:21:01.682284561Z" level=info msg="CreateContainer within sandbox \"8c01f47515a5d7532286d6090545adff5fd65e39639b0f0f14616de4983377ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 01:21:01.685860 containerd[1527]: time="2025-05-14T01:21:01.685814052Z" level=info msg="CreateContainer within sandbox \"a40ea0bfe470afc30de2c5a527eca5bd6a94263660f1e2aca4cb7f36cf4f9c8d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 01:21:01.691976 containerd[1527]: time="2025-05-14T01:21:01.691922877Z" level=info msg="Container cf0e75c98920fc7d84f2936b577dc5ae11233ff02f1ee2d0de6fcca3e73241f0: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:01.693366 containerd[1527]: time="2025-05-14T01:21:01.693332030Z" level=info msg="Container d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:01.696226 containerd[1527]: time="2025-05-14T01:21:01.696202245Z" level=info msg="Container 656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:01.706530 containerd[1527]: time="2025-05-14T01:21:01.706491600Z" level=info msg="CreateContainer within sandbox \"454c159fcd3f69ac81b9f21a2ccee0c7c4bfeed8cf63ac7b73bd9a7a470b2d1d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cf0e75c98920fc7d84f2936b577dc5ae11233ff02f1ee2d0de6fcca3e73241f0\"" May 14 01:21:01.707809 containerd[1527]: time="2025-05-14T01:21:01.707731633Z" level=info msg="StartContainer for \"cf0e75c98920fc7d84f2936b577dc5ae11233ff02f1ee2d0de6fcca3e73241f0\"" May 14 01:21:01.710552 containerd[1527]: time="2025-05-14T01:21:01.710518500Z" level=info msg="connecting to shim 
cf0e75c98920fc7d84f2936b577dc5ae11233ff02f1ee2d0de6fcca3e73241f0" address="unix:///run/containerd/s/6d514fc98448330a3562db492d6f2654ea51d8524a2a03cb2bb68c9e277dcb58" protocol=ttrpc version=3 May 14 01:21:01.713288 containerd[1527]: time="2025-05-14T01:21:01.712940819Z" level=info msg="CreateContainer within sandbox \"a40ea0bfe470afc30de2c5a527eca5bd6a94263660f1e2aca4cb7f36cf4f9c8d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45\"" May 14 01:21:01.713747 containerd[1527]: time="2025-05-14T01:21:01.713715052Z" level=info msg="StartContainer for \"656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45\"" May 14 01:21:01.714666 containerd[1527]: time="2025-05-14T01:21:01.714627155Z" level=info msg="CreateContainer within sandbox \"8c01f47515a5d7532286d6090545adff5fd65e39639b0f0f14616de4983377ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96\"" May 14 01:21:01.715032 containerd[1527]: time="2025-05-14T01:21:01.715019026Z" level=info msg="StartContainer for \"d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96\"" May 14 01:21:01.716335 containerd[1527]: time="2025-05-14T01:21:01.716296271Z" level=info msg="connecting to shim d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96" address="unix:///run/containerd/s/1d6056b51e18d46ba6e52b0aa2b12e2c1fcb9595c507472f6322d21a48ca6e63" protocol=ttrpc version=3 May 14 01:21:01.717232 containerd[1527]: time="2025-05-14T01:21:01.717214876Z" level=info msg="connecting to shim 656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45" address="unix:///run/containerd/s/b708869e27e69f179ecfb4f2ae026567c827252927ae8879f9f494e8b83ecd7b" protocol=ttrpc version=3 May 14 01:21:01.735814 systemd[1]: Started cri-containerd-cf0e75c98920fc7d84f2936b577dc5ae11233ff02f1ee2d0de6fcca3e73241f0.scope - libcontainer container cf0e75c98920fc7d84f2936b577dc5ae11233ff02f1ee2d0de6fcca3e73241f0. May 14 01:21:01.740943 systemd[1]: Started cri-containerd-656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45.scope - libcontainer container 656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45. May 14 01:21:01.751817 systemd[1]: Started cri-containerd-d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96.scope - libcontainer container d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96. 
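
Once the three StartContainer calls below return successfully, the control-plane containers should be visible through the CRI. A sketch using crictl, assuming it is installed and containerd's CRI socket sits at the default /run/containerd/containerd.sock (the unix:///run/containerd/s/... addresses in the log are per-shim sockets, not the CRI endpoint):

    import subprocess

    ENDPOINT = "unix:///run/containerd/containerd.sock"  # assumed default CRI socket

    def list_containers(name_filter: str) -> str:
        cmd = ["crictl", "--runtime-endpoint", ENDPOINT, "ps", "--name", name_filter]
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    if __name__ == "__main__":
        for name in ("kube-apiserver", "kube-controller-manager", "kube-scheduler"):
            print(list_containers(name))
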
May 14 01:21:01.767081 kubelet[2462]: W0514 01:21:01.767013 2462 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.220.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-c0828c9b46&limit=500&resourceVersion=0": dial tcp 37.27.220.42:6443: connect: connection refused
May 14 01:21:01.767081 kubelet[2462]: E0514 01:21:01.767084 2462 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.220.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-n-c0828c9b46&limit=500&resourceVersion=0\": dial tcp 37.27.220.42:6443: connect: connection refused" logger="UnhandledError"
May 14 01:21:01.791348 containerd[1527]: time="2025-05-14T01:21:01.791313706Z" level=info msg="StartContainer for \"cf0e75c98920fc7d84f2936b577dc5ae11233ff02f1ee2d0de6fcca3e73241f0\" returns successfully"
May 14 01:21:01.829741 containerd[1527]: time="2025-05-14T01:21:01.829533498Z" level=info msg="StartContainer for \"d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96\" returns successfully"
May 14 01:21:01.834538 containerd[1527]: time="2025-05-14T01:21:01.834353086Z" level=info msg="StartContainer for \"656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45\" returns successfully"
May 14 01:21:01.856749 kubelet[2462]: W0514 01:21:01.856523 2462 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.220.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.220.42:6443: connect: connection refused
May 14 01:21:01.856749 kubelet[2462]: E0514 01:21:01.856617 2462 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.220.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.220.42:6443: connect: connection refused" logger="UnhandledError"
May 14 01:21:01.859120 kubelet[2462]: W0514 01:21:01.859086 2462 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.220.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.220.42:6443: connect: connection refused
May 14 01:21:01.859200 kubelet[2462]: E0514 01:21:01.859125 2462 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.220.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.220.42:6443: connect: connection refused" logger="UnhandledError"
May 14 01:21:01.919424 kubelet[2462]: E0514 01:21:01.919194 2462 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-c0828c9b46\" not found" node="ci-4284-0-0-n-c0828c9b46"
May 14 01:21:01.921751 kubelet[2462]: E0514 01:21:01.921054 2462 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-c0828c9b46\" not found" node="ci-4284-0-0-n-c0828c9b46"
May 14 01:21:01.923817 kubelet[2462]: E0514 01:21:01.923802 2462 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-c0828c9b46\" not found" node="ci-4284-0-0-n-c0828c9b46"
May 14 01:21:02.468391 kubelet[2462]: I0514 01:21:02.468340 2462 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-c0828c9b46"
May 14 01:21:02.927507 kubelet[2462]: E0514 01:21:02.927334 2462 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-c0828c9b46\" not found" node="ci-4284-0-0-n-c0828c9b46"
May 14 01:21:02.928707 kubelet[2462]: E0514 01:21:02.928333 2462 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4284-0-0-n-c0828c9b46\" not found" node="ci-4284-0-0-n-c0828c9b46"
May 14 01:21:03.417022 kubelet[2462]: E0514 01:21:03.416971 2462 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284-0-0-n-c0828c9b46\" not found" node="ci-4284-0-0-n-c0828c9b46"
May 14 01:21:03.486310 kubelet[2462]: I0514 01:21:03.486135 2462 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284-0-0-n-c0828c9b46"
May 14 01:21:03.486310 kubelet[2462]: E0514 01:21:03.486173 2462 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"ci-4284-0-0-n-c0828c9b46\": node \"ci-4284-0-0-n-c0828c9b46\" not found"
May 14 01:21:03.490832 kubelet[2462]: E0514 01:21:03.490783 2462 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-c0828c9b46\" not found"
May 14 01:21:03.591246 kubelet[2462]: E0514 01:21:03.591157 2462 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-c0828c9b46\" not found"
May 14 01:21:03.692474 kubelet[2462]: E0514 01:21:03.692260 2462 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-c0828c9b46\" not found"
May 14 01:21:03.792669 kubelet[2462]: E0514 01:21:03.792559 2462 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-c0828c9b46\" not found"
May 14 01:21:03.893039 kubelet[2462]: E0514 01:21:03.892948 2462 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-c0828c9b46\" not found"
May 14 01:21:03.993766 kubelet[2462]: E0514 01:21:03.993583 2462 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-c0828c9b46\" not found"
May 14 01:21:04.094794 kubelet[2462]: E0514 01:21:04.094707 2462 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-c0828c9b46\" not found"
May 14 01:21:04.195871 kubelet[2462]: E0514 01:21:04.195827 2462 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4284-0-0-n-c0828c9b46\" not found"
May 14 01:21:04.278731 kubelet[2462]: I0514 01:21:04.278550 2462 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46"
May 14 01:21:04.289135 kubelet[2462]: E0514 01:21:04.289086 2462 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284-0-0-n-c0828c9b46\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46"
May 14 01:21:04.289135 kubelet[2462]: I0514 01:21:04.289129 2462 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46"
May 14 01:21:04.291826 kubelet[2462]: E0514 01:21:04.291777 2462 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46"
May 14 01:21:04.291826 kubelet[2462]: I0514 01:21:04.291812 2462 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-c0828c9b46"
May 14 01:21:04.294251 kubelet[2462]: E0514 01:21:04.294181 2462 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-c0828c9b46\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4284-0-0-n-c0828c9b46"
May 14 01:21:04.845317 kubelet[2462]: I0514 01:21:04.844972 2462 apiserver.go:52] "Watching apiserver"
May 14 01:21:04.872865 kubelet[2462]: I0514 01:21:04.872769 2462 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 14 01:21:05.801996 kubelet[2462]: I0514 01:21:05.801912 2462 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46"
May 14 01:21:06.037276 systemd[1]: Reload requested from client PID 2734 ('systemctl') (unit session-7.scope)...
May 14 01:21:06.037312 systemd[1]: Reloading...
May 14 01:21:06.175740 zram_generator::config[2782]: No configuration found.
May 14 01:21:06.286002 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 01:21:06.402403 systemd[1]: Reloading finished in 364 ms.
May 14 01:21:06.427507 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 01:21:06.446377 systemd[1]: kubelet.service: Deactivated successfully.
May 14 01:21:06.446604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 01:21:06.446676 systemd[1]: kubelet.service: Consumed 735ms CPU time, 123.8M memory peak.
May 14 01:21:06.449558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 14 01:21:06.661721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 14 01:21:06.674118 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 14 01:21:06.740791 kubelet[2830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 01:21:06.740791 kubelet[2830]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 14 01:21:06.740791 kubelet[2830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 14 01:21:06.741294 kubelet[2830]: I0514 01:21:06.740870 2830 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 01:21:06.747165 kubelet[2830]: I0514 01:21:06.746443 2830 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 14 01:21:06.747165 kubelet[2830]: I0514 01:21:06.746463 2830 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 01:21:06.747165 kubelet[2830]: I0514 01:21:06.746655 2830 server.go:954] "Client rotation is on, will bootstrap in background" May 14 01:21:06.748474 kubelet[2830]: I0514 01:21:06.747969 2830 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 01:21:06.755764 kubelet[2830]: I0514 01:21:06.755159 2830 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 01:21:06.760621 kubelet[2830]: I0514 01:21:06.760583 2830 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 01:21:06.763165 kubelet[2830]: I0514 01:21:06.763140 2830 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 01:21:06.763948 kubelet[2830]: I0514 01:21:06.763288 2830 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 01:21:06.763948 kubelet[2830]: I0514 01:21:06.763313 2830 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-n-c0828c9b46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 01:21:06.763948 kubelet[2830]: I0514 01:21:06.763467 2830 topology_manager.go:138] "Creating topology manager with none policy" May 14 01:21:06.763948 kubelet[2830]: I0514 01:21:06.763473 2830 container_manager_linux.go:304] "Creating device plugin manager" May 14 01:21:06.764182 kubelet[2830]: I0514 01:21:06.763510 2830 state_mem.go:36] "Initialized new in-memory state store" May 14 01:21:06.764182 
kubelet[2830]: I0514 01:21:06.763614 2830 kubelet.go:446] "Attempting to sync node with API server" May 14 01:21:06.764182 kubelet[2830]: I0514 01:21:06.763627 2830 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 01:21:06.764182 kubelet[2830]: I0514 01:21:06.763679 2830 kubelet.go:352] "Adding apiserver pod source" May 14 01:21:06.764182 kubelet[2830]: I0514 01:21:06.763688 2830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 01:21:06.767108 kubelet[2830]: I0514 01:21:06.766481 2830 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 01:21:06.767108 kubelet[2830]: I0514 01:21:06.766768 2830 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 01:21:06.767877 kubelet[2830]: I0514 01:21:06.767853 2830 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 14 01:21:06.767877 kubelet[2830]: I0514 01:21:06.767882 2830 server.go:1287] "Started kubelet" May 14 01:21:06.771686 kubelet[2830]: I0514 01:21:06.770782 2830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 01:21:06.779628 kubelet[2830]: I0514 01:21:06.779572 2830 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 14 01:21:06.786731 kubelet[2830]: I0514 01:21:06.786674 2830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 01:21:06.787201 kubelet[2830]: I0514 01:21:06.787187 2830 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 01:21:06.792275 kubelet[2830]: I0514 01:21:06.792252 2830 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 01:21:06.792594 kubelet[2830]: I0514 01:21:06.792555 2830 server.go:490] "Adding debug handlers to kubelet server" May 14 01:21:06.794963 kubelet[2830]: I0514 01:21:06.794595 2830 volume_manager.go:297] "Starting Kubelet Volume Manager" May 14 01:21:06.795860 kubelet[2830]: I0514 01:21:06.795584 2830 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 01:21:06.795860 kubelet[2830]: I0514 01:21:06.795778 2830 reconciler.go:26] "Reconciler: start to sync state" May 14 01:21:06.802144 kubelet[2830]: E0514 01:21:06.801844 2830 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 01:21:06.802800 kubelet[2830]: I0514 01:21:06.802753 2830 factory.go:221] Registration of the containerd container factory successfully May 14 01:21:06.802800 kubelet[2830]: I0514 01:21:06.802766 2830 factory.go:221] Registration of the systemd container factory successfully May 14 01:21:06.802870 kubelet[2830]: I0514 01:21:06.802815 2830 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 01:21:06.806065 kubelet[2830]: I0514 01:21:06.805892 2830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 01:21:06.808926 kubelet[2830]: I0514 01:21:06.808902 2830 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 01:21:06.808926 kubelet[2830]: I0514 01:21:06.808930 2830 status_manager.go:227] "Starting to sync pod status with apiserver" May 14 01:21:06.809705 kubelet[2830]: I0514 01:21:06.809683 2830 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 14 01:21:06.809705 kubelet[2830]: I0514 01:21:06.809701 2830 kubelet.go:2388] "Starting kubelet main sync loop" May 14 01:21:06.809870 kubelet[2830]: E0514 01:21:06.809741 2830 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 01:21:06.840567 kubelet[2830]: I0514 01:21:06.840533 2830 cpu_manager.go:221] "Starting CPU manager" policy="none" May 14 01:21:06.840567 kubelet[2830]: I0514 01:21:06.840550 2830 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 14 01:21:06.840567 kubelet[2830]: I0514 01:21:06.840566 2830 state_mem.go:36] "Initialized new in-memory state store" May 14 01:21:06.843331 kubelet[2830]: I0514 01:21:06.840764 2830 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 01:21:06.843331 kubelet[2830]: I0514 01:21:06.840775 2830 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 01:21:06.843331 kubelet[2830]: I0514 01:21:06.840792 2830 policy_none.go:49] "None policy: Start" May 14 01:21:06.843331 kubelet[2830]: I0514 01:21:06.840801 2830 memory_manager.go:186] "Starting memorymanager" policy="None" May 14 01:21:06.843331 kubelet[2830]: I0514 01:21:06.840808 2830 state_mem.go:35] "Initializing new in-memory state store" May 14 01:21:06.843331 kubelet[2830]: I0514 01:21:06.840889 2830 state_mem.go:75] "Updated machine memory state" May 14 01:21:06.847740 kubelet[2830]: I0514 01:21:06.847708 2830 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 01:21:06.847883 kubelet[2830]: I0514 01:21:06.847857 2830 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 01:21:06.847915 kubelet[2830]: I0514 01:21:06.847873 2830 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 01:21:06.851434 kubelet[2830]: I0514 01:21:06.850207 2830 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 01:21:06.851983 kubelet[2830]: E0514 01:21:06.851966 2830 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 14 01:21:06.911778 kubelet[2830]: I0514 01:21:06.911623 2830 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.916172 kubelet[2830]: I0514 01:21:06.914297 2830 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.916172 kubelet[2830]: I0514 01:21:06.914624 2830 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.926917 kubelet[2830]: E0514 01:21:06.926876 2830 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4284-0-0-n-c0828c9b46\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.954850 kubelet[2830]: I0514 01:21:06.954820 2830 kubelet_node_status.go:76] "Attempting to register node" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.966301 kubelet[2830]: I0514 01:21:06.966090 2830 kubelet_node_status.go:125] "Node was previously registered" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.966301 kubelet[2830]: I0514 01:21:06.966199 2830 kubelet_node_status.go:79] "Successfully registered node" node="ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.997780 kubelet[2830]: I0514 01:21:06.997704 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d398981942494b5dc23c9a73e304dbc-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" (UID: \"3d398981942494b5dc23c9a73e304dbc\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.997780 kubelet[2830]: I0514 01:21:06.997765 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d398981942494b5dc23c9a73e304dbc-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" (UID: \"3d398981942494b5dc23c9a73e304dbc\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.997979 kubelet[2830]: I0514 01:21:06.997799 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d398981942494b5dc23c9a73e304dbc-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" (UID: \"3d398981942494b5dc23c9a73e304dbc\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.997979 kubelet[2830]: I0514 01:21:06.997833 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d398981942494b5dc23c9a73e304dbc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" (UID: \"3d398981942494b5dc23c9a73e304dbc\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.997979 kubelet[2830]: I0514 01:21:06.997864 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/caff0bbade5aaf97f4bf499d3bc8a317-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-n-c0828c9b46\" (UID: \"caff0bbade5aaf97f4bf499d3bc8a317\") " pod="kube-system/kube-scheduler-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.997979 kubelet[2830]: I0514 01:21:06.997891 2830 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1527dbd9d2e0e5bd676f6e700ed79080-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-n-c0828c9b46\" (UID: \"1527dbd9d2e0e5bd676f6e700ed79080\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.997979 kubelet[2830]: I0514 01:21:06.997917 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d398981942494b5dc23c9a73e304dbc-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-n-c0828c9b46\" (UID: \"3d398981942494b5dc23c9a73e304dbc\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.998138 kubelet[2830]: I0514 01:21:06.997949 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1527dbd9d2e0e5bd676f6e700ed79080-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-n-c0828c9b46\" (UID: \"1527dbd9d2e0e5bd676f6e700ed79080\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46" May 14 01:21:06.998138 kubelet[2830]: I0514 01:21:06.997975 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1527dbd9d2e0e5bd676f6e700ed79080-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-n-c0828c9b46\" (UID: \"1527dbd9d2e0e5bd676f6e700ed79080\") " pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46" May 14 01:21:07.766910 kubelet[2830]: I0514 01:21:07.766853 2830 apiserver.go:52] "Watching apiserver" May 14 01:21:07.797707 kubelet[2830]: I0514 01:21:07.795891 2830 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 01:21:07.829294 kubelet[2830]: I0514 01:21:07.829216 2830 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4284-0-0-n-c0828c9b46" May 14 01:21:07.838449 kubelet[2830]: E0514 01:21:07.838405 2830 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4284-0-0-n-c0828c9b46\" already exists" pod="kube-system/kube-scheduler-ci-4284-0-0-n-c0828c9b46" May 14 01:21:07.845222 kubelet[2830]: I0514 01:21:07.845103 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-n-c0828c9b46" podStartSLOduration=1.8450806910000002 podStartE2EDuration="1.845080691s" podCreationTimestamp="2025-05-14 01:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:21:07.834007272 +0000 UTC m=+1.153820885" watchObservedRunningTime="2025-05-14 01:21:07.845080691 +0000 UTC m=+1.164894324" May 14 01:21:07.855668 kubelet[2830]: I0514 01:21:07.855604 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-n-c0828c9b46" podStartSLOduration=2.855590242 podStartE2EDuration="2.855590242s" podCreationTimestamp="2025-05-14 01:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:21:07.845667091 +0000 UTC m=+1.165480725" watchObservedRunningTime="2025-05-14 01:21:07.855590242 +0000 UTC m=+1.175403875" May 14 01:21:07.855998 kubelet[2830]: I0514 01:21:07.855965 2830 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-n-c0828c9b46" podStartSLOduration=1.855887075 podStartE2EDuration="1.855887075s" podCreationTimestamp="2025-05-14 01:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:21:07.854736916 +0000 UTC m=+1.174550549" watchObservedRunningTime="2025-05-14 01:21:07.855887075 +0000 UTC m=+1.175700707" May 14 01:21:10.742247 kubelet[2830]: I0514 01:21:10.742187 2830 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 01:21:10.742981 containerd[1527]: time="2025-05-14T01:21:10.742853826Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 01:21:10.744072 kubelet[2830]: I0514 01:21:10.743750 2830 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 01:21:11.564851 systemd[1]: Created slice kubepods-besteffort-pod393381bb_24da_449b_9f72_ef169ea8ecd8.slice - libcontainer container kubepods-besteffort-pod393381bb_24da_449b_9f72_ef169ea8ecd8.slice. May 14 01:21:11.627824 kubelet[2830]: I0514 01:21:11.627749 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/393381bb-24da-449b-9f72-ef169ea8ecd8-xtables-lock\") pod \"kube-proxy-6nb79\" (UID: \"393381bb-24da-449b-9f72-ef169ea8ecd8\") " pod="kube-system/kube-proxy-6nb79" May 14 01:21:11.627824 kubelet[2830]: I0514 01:21:11.627809 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/393381bb-24da-449b-9f72-ef169ea8ecd8-lib-modules\") pod \"kube-proxy-6nb79\" (UID: \"393381bb-24da-449b-9f72-ef169ea8ecd8\") " pod="kube-system/kube-proxy-6nb79" May 14 01:21:11.627824 kubelet[2830]: I0514 01:21:11.627841 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqh59\" (UniqueName: \"kubernetes.io/projected/393381bb-24da-449b-9f72-ef169ea8ecd8-kube-api-access-rqh59\") pod \"kube-proxy-6nb79\" (UID: \"393381bb-24da-449b-9f72-ef169ea8ecd8\") " pod="kube-system/kube-proxy-6nb79" May 14 01:21:11.628225 kubelet[2830]: I0514 01:21:11.627876 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/393381bb-24da-449b-9f72-ef169ea8ecd8-kube-proxy\") pod \"kube-proxy-6nb79\" (UID: \"393381bb-24da-449b-9f72-ef169ea8ecd8\") " pod="kube-system/kube-proxy-6nb79" May 14 01:21:11.874907 containerd[1527]: time="2025-05-14T01:21:11.873380298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nb79,Uid:393381bb-24da-449b-9f72-ef169ea8ecd8,Namespace:kube-system,Attempt:0,}" May 14 01:21:11.879235 systemd[1]: Created slice kubepods-besteffort-podf046a430_7045_4008_a8cf_68b0afc3108d.slice - libcontainer container kubepods-besteffort-podf046a430_7045_4008_a8cf_68b0afc3108d.slice. 
May 14 01:21:11.909539 containerd[1527]: time="2025-05-14T01:21:11.909236422Z" level=info msg="connecting to shim 1753dd8af2c9586042440710ea6306a5fe86d45c38dd972143b35bfa304cf49b" address="unix:///run/containerd/s/4bea2ccd0e363119d0a0a71c59a37af4a1d7375f6f5d440b86803d369e1b3471" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:11.930590 kubelet[2830]: I0514 01:21:11.930084 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f046a430-7045-4008-a8cf-68b0afc3108d-var-lib-calico\") pod \"tigera-operator-789496d6f5-ccrxv\" (UID: \"f046a430-7045-4008-a8cf-68b0afc3108d\") " pod="tigera-operator/tigera-operator-789496d6f5-ccrxv" May 14 01:21:11.933860 kubelet[2830]: I0514 01:21:11.930600 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lnvf\" (UniqueName: \"kubernetes.io/projected/f046a430-7045-4008-a8cf-68b0afc3108d-kube-api-access-9lnvf\") pod \"tigera-operator-789496d6f5-ccrxv\" (UID: \"f046a430-7045-4008-a8cf-68b0afc3108d\") " pod="tigera-operator/tigera-operator-789496d6f5-ccrxv" May 14 01:21:11.933781 systemd[1]: Started cri-containerd-1753dd8af2c9586042440710ea6306a5fe86d45c38dd972143b35bfa304cf49b.scope - libcontainer container 1753dd8af2c9586042440710ea6306a5fe86d45c38dd972143b35bfa304cf49b. May 14 01:21:11.960335 containerd[1527]: time="2025-05-14T01:21:11.960273217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nb79,Uid:393381bb-24da-449b-9f72-ef169ea8ecd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1753dd8af2c9586042440710ea6306a5fe86d45c38dd972143b35bfa304cf49b\"" May 14 01:21:11.963131 containerd[1527]: time="2025-05-14T01:21:11.963100576Z" level=info msg="CreateContainer within sandbox \"1753dd8af2c9586042440710ea6306a5fe86d45c38dd972143b35bfa304cf49b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 01:21:11.986329 containerd[1527]: time="2025-05-14T01:21:11.985019991Z" level=info msg="Container 0b2b9e7a8e812a51ded7968c88b4ad6f89564a815b3b7783a0b874fc441872cc: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:11.994268 containerd[1527]: time="2025-05-14T01:21:11.994206898Z" level=info msg="CreateContainer within sandbox \"1753dd8af2c9586042440710ea6306a5fe86d45c38dd972143b35bfa304cf49b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b2b9e7a8e812a51ded7968c88b4ad6f89564a815b3b7783a0b874fc441872cc\"" May 14 01:21:11.995438 containerd[1527]: time="2025-05-14T01:21:11.995382206Z" level=info msg="StartContainer for \"0b2b9e7a8e812a51ded7968c88b4ad6f89564a815b3b7783a0b874fc441872cc\"" May 14 01:21:11.998069 containerd[1527]: time="2025-05-14T01:21:11.998029624Z" level=info msg="connecting to shim 0b2b9e7a8e812a51ded7968c88b4ad6f89564a815b3b7783a0b874fc441872cc" address="unix:///run/containerd/s/4bea2ccd0e363119d0a0a71c59a37af4a1d7375f6f5d440b86803d369e1b3471" protocol=ttrpc version=3 May 14 01:21:12.025858 systemd[1]: Started cri-containerd-0b2b9e7a8e812a51ded7968c88b4ad6f89564a815b3b7783a0b874fc441872cc.scope - libcontainer container 0b2b9e7a8e812a51ded7968c88b4ad6f89564a815b3b7783a0b874fc441872cc. 
May 14 01:21:12.070572 containerd[1527]: time="2025-05-14T01:21:12.070519854Z" level=info msg="StartContainer for \"0b2b9e7a8e812a51ded7968c88b4ad6f89564a815b3b7783a0b874fc441872cc\" returns successfully"
May 14 01:21:12.185870 containerd[1527]: time="2025-05-14T01:21:12.185668105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-ccrxv,Uid:f046a430-7045-4008-a8cf-68b0afc3108d,Namespace:tigera-operator,Attempt:0,}"
May 14 01:21:12.216916 containerd[1527]: time="2025-05-14T01:21:12.215837555Z" level=info msg="connecting to shim a0fb3db6c3c2ef566a4db7a65902d29c184c9c5b07d10938b1c1536393fe43c7" address="unix:///run/containerd/s/4df6e0b3267a2df4859f0b379233e8ba465a78031520d0c4ed6576108aa80ace" namespace=k8s.io protocol=ttrpc version=3
May 14 01:21:12.257202 systemd[1]: Started cri-containerd-a0fb3db6c3c2ef566a4db7a65902d29c184c9c5b07d10938b1c1536393fe43c7.scope - libcontainer container a0fb3db6c3c2ef566a4db7a65902d29c184c9c5b07d10938b1c1536393fe43c7.
May 14 01:21:12.347822 containerd[1527]: time="2025-05-14T01:21:12.347759173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-789496d6f5-ccrxv,Uid:f046a430-7045-4008-a8cf-68b0afc3108d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a0fb3db6c3c2ef566a4db7a65902d29c184c9c5b07d10938b1c1536393fe43c7\""
May 14 01:21:12.351106 containerd[1527]: time="2025-05-14T01:21:12.351005830Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\""
May 14 01:21:12.502091 sudo[1918]: pam_unix(sudo:session): session closed for user root
May 14 01:21:12.665270 sshd[1917]: Connection closed by 139.178.89.65 port 37260
May 14 01:21:12.667807 sshd-session[1900]: pam_unix(sshd:session): session closed for user core
May 14 01:21:12.674035 systemd[1]: sshd@7-37.27.220.42:22-139.178.89.65:37260.service: Deactivated successfully.
May 14 01:21:12.679185 systemd[1]: session-7.scope: Deactivated successfully.
May 14 01:21:12.679708 systemd[1]: session-7.scope: Consumed 4.955s CPU time, 149.7M memory peak.
May 14 01:21:12.682521 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit.
May 14 01:21:12.684812 systemd-logind[1504]: Removed session 7.
May 14 01:21:12.769930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842729704.mount: Deactivated successfully.
May 14 01:21:12.889898 kubelet[2830]: I0514 01:21:12.889752 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6nb79" podStartSLOduration=1.889722931 podStartE2EDuration="1.889722931s" podCreationTimestamp="2025-05-14 01:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:21:12.88941147 +0000 UTC m=+6.209225143" watchObservedRunningTime="2025-05-14 01:21:12.889722931 +0000 UTC m=+6.209536604"
May 14 01:21:24.551043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867923728.mount: Deactivated successfully.
May 14 01:21:24.933165 containerd[1527]: time="2025-05-14T01:21:24.933027615Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:24.934523 containerd[1527]: time="2025-05-14T01:21:24.934454758Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=22002662" May 14 01:21:24.935704 containerd[1527]: time="2025-05-14T01:21:24.935596658Z" level=info msg="ImageCreate event name:\"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:24.937848 containerd[1527]: time="2025-05-14T01:21:24.937822009Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:24.938416 containerd[1527]: time="2025-05-14T01:21:24.938254231Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"21998657\" in 12.58720569s" May 14 01:21:24.938416 containerd[1527]: time="2025-05-14T01:21:24.938292374Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 14 01:21:24.945362 containerd[1527]: time="2025-05-14T01:21:24.944926786Z" level=info msg="CreateContainer within sandbox \"a0fb3db6c3c2ef566a4db7a65902d29c184c9c5b07d10938b1c1536393fe43c7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 14 01:21:24.955230 containerd[1527]: time="2025-05-14T01:21:24.955182103Z" level=info msg="Container 7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:24.963850 containerd[1527]: time="2025-05-14T01:21:24.963800016Z" level=info msg="CreateContainer within sandbox \"a0fb3db6c3c2ef566a4db7a65902d29c184c9c5b07d10938b1c1536393fe43c7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb\"" May 14 01:21:24.964544 containerd[1527]: time="2025-05-14T01:21:24.964437759Z" level=info msg="StartContainer for \"7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb\"" May 14 01:21:24.965769 containerd[1527]: time="2025-05-14T01:21:24.965730015Z" level=info msg="connecting to shim 7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb" address="unix:///run/containerd/s/4df6e0b3267a2df4859f0b379233e8ba465a78031520d0c4ed6576108aa80ace" protocol=ttrpc version=3 May 14 01:21:24.985854 systemd[1]: Started cri-containerd-7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb.scope - libcontainer container 7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb. 
May 14 01:21:25.023861 containerd[1527]: time="2025-05-14T01:21:25.023758570Z" level=info msg="StartContainer for \"7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb\" returns successfully"
May 14 01:21:25.894609 kubelet[2830]: I0514 01:21:25.893948 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-789496d6f5-ccrxv" podStartSLOduration=2.3006520999999998 podStartE2EDuration="14.893908088s" podCreationTimestamp="2025-05-14 01:21:11 +0000 UTC" firstStartedPulling="2025-05-14 01:21:12.350253493 +0000 UTC m=+5.670067117" lastFinishedPulling="2025-05-14 01:21:24.943509492 +0000 UTC m=+18.263323105" observedRunningTime="2025-05-14 01:21:25.893503178 +0000 UTC m=+19.213316831" watchObservedRunningTime="2025-05-14 01:21:25.893908088 +0000 UTC m=+19.213721731"
May 14 01:21:28.195980 systemd[1]: Created slice kubepods-besteffort-pod4a6291b5_5586_4151_b24e_80e9823aed09.slice - libcontainer container kubepods-besteffort-pod4a6291b5_5586_4151_b24e_80e9823aed09.slice.
May 14 01:21:28.246895 kubelet[2830]: I0514 01:21:28.246850 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a6291b5-5586-4151-b24e-80e9823aed09-tigera-ca-bundle\") pod \"calico-typha-8dc7ccffb-w6dqq\" (UID: \"4a6291b5-5586-4151-b24e-80e9823aed09\") " pod="calico-system/calico-typha-8dc7ccffb-w6dqq"
May 14 01:21:28.246895 kubelet[2830]: I0514 01:21:28.246895 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbtkz\" (UniqueName: \"kubernetes.io/projected/4a6291b5-5586-4151-b24e-80e9823aed09-kube-api-access-tbtkz\") pod \"calico-typha-8dc7ccffb-w6dqq\" (UID: \"4a6291b5-5586-4151-b24e-80e9823aed09\") " pod="calico-system/calico-typha-8dc7ccffb-w6dqq"
May 14 01:21:28.246895 kubelet[2830]: I0514 01:21:28.246909 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4a6291b5-5586-4151-b24e-80e9823aed09-typha-certs\") pod \"calico-typha-8dc7ccffb-w6dqq\" (UID: \"4a6291b5-5586-4151-b24e-80e9823aed09\") " pod="calico-system/calico-typha-8dc7ccffb-w6dqq"
May 14 01:21:28.260552 systemd[1]: Created slice kubepods-besteffort-pod5ba06c45_e29a_43cb_83e3_5bfa306ddbc2.slice - libcontainer container kubepods-besteffort-pod5ba06c45_e29a_43cb_83e3_5bfa306ddbc2.slice.
May 14 01:21:28.347552 kubelet[2830]: I0514 01:21:28.347120 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-xtables-lock\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347552 kubelet[2830]: I0514 01:21:28.347167 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-cni-net-dir\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347552 kubelet[2830]: I0514 01:21:28.347185 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-var-lib-calico\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347552 kubelet[2830]: I0514 01:21:28.347201 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-var-run-calico\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347552 kubelet[2830]: I0514 01:21:28.347214 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-cni-log-dir\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347821 kubelet[2830]: I0514 01:21:28.347229 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-flexvol-driver-host\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347821 kubelet[2830]: I0514 01:21:28.347244 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d75nj\" (UniqueName: \"kubernetes.io/projected/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-kube-api-access-d75nj\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347821 kubelet[2830]: I0514 01:21:28.347263 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-tigera-ca-bundle\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347821 kubelet[2830]: I0514 01:21:28.347284 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-node-certs\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347821 kubelet[2830]: I0514 01:21:28.347298 2830 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-cni-bin-dir\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347922 kubelet[2830]: I0514 01:21:28.347353 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-lib-modules\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.347922 kubelet[2830]: I0514 01:21:28.347415 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5ba06c45-e29a-43cb-83e3-5bfa306ddbc2-policysync\") pod \"calico-node-wvt49\" (UID: \"5ba06c45-e29a-43cb-83e3-5bfa306ddbc2\") " pod="calico-system/calico-node-wvt49" May 14 01:21:28.391008 kubelet[2830]: E0514 01:21:28.390954 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2jzfd" podUID="1ff2619d-9812-4662-94f4-719dc464e433" May 14 01:21:28.449674 kubelet[2830]: I0514 01:21:28.449023 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1ff2619d-9812-4662-94f4-719dc464e433-registration-dir\") pod \"csi-node-driver-2jzfd\" (UID: \"1ff2619d-9812-4662-94f4-719dc464e433\") " pod="calico-system/csi-node-driver-2jzfd" May 14 01:21:28.450285 kubelet[2830]: I0514 01:21:28.450265 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pps7v\" (UniqueName: \"kubernetes.io/projected/1ff2619d-9812-4662-94f4-719dc464e433-kube-api-access-pps7v\") pod \"csi-node-driver-2jzfd\" (UID: \"1ff2619d-9812-4662-94f4-719dc464e433\") " pod="calico-system/csi-node-driver-2jzfd" May 14 01:21:28.450334 kubelet[2830]: I0514 01:21:28.450312 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ff2619d-9812-4662-94f4-719dc464e433-kubelet-dir\") pod \"csi-node-driver-2jzfd\" (UID: \"1ff2619d-9812-4662-94f4-719dc464e433\") " pod="calico-system/csi-node-driver-2jzfd" May 14 01:21:28.450334 kubelet[2830]: I0514 01:21:28.450327 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1ff2619d-9812-4662-94f4-719dc464e433-socket-dir\") pod \"csi-node-driver-2jzfd\" (UID: \"1ff2619d-9812-4662-94f4-719dc464e433\") " pod="calico-system/csi-node-driver-2jzfd" May 14 01:21:28.450414 kubelet[2830]: I0514 01:21:28.450356 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1ff2619d-9812-4662-94f4-719dc464e433-varrun\") pod \"csi-node-driver-2jzfd\" (UID: \"1ff2619d-9812-4662-94f4-719dc464e433\") " pod="calico-system/csi-node-driver-2jzfd" May 14 01:21:28.451664 kubelet[2830]: E0514 01:21:28.451498 2830 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input May 14 01:21:28.451664 kubelet[2830]: W0514 01:21:28.451514 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.451664 kubelet[2830]: E0514 01:21:28.451528 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.453866 kubelet[2830]: E0514 01:21:28.453772 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.453866 kubelet[2830]: W0514 01:21:28.453783 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.453866 kubelet[2830]: E0514 01:21:28.453794 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.454104 kubelet[2830]: E0514 01:21:28.454042 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.454104 kubelet[2830]: W0514 01:21:28.454050 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.454104 kubelet[2830]: E0514 01:21:28.454057 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.454630 kubelet[2830]: E0514 01:21:28.454588 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.454630 kubelet[2830]: W0514 01:21:28.454598 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.454630 kubelet[2830]: E0514 01:21:28.454606 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.457608 kubelet[2830]: E0514 01:21:28.457559 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.457608 kubelet[2830]: W0514 01:21:28.457570 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.457608 kubelet[2830]: E0514 01:21:28.457579 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:21:28.458210 kubelet[2830]: E0514 01:21:28.458075 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.458210 kubelet[2830]: W0514 01:21:28.458082 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.458210 kubelet[2830]: E0514 01:21:28.458092 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.473053 kubelet[2830]: E0514 01:21:28.471782 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.473053 kubelet[2830]: W0514 01:21:28.471800 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.473053 kubelet[2830]: E0514 01:21:28.471819 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.500435 containerd[1527]: time="2025-05-14T01:21:28.500384264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8dc7ccffb-w6dqq,Uid:4a6291b5-5586-4151-b24e-80e9823aed09,Namespace:calico-system,Attempt:0,}" May 14 01:21:28.545721 containerd[1527]: time="2025-05-14T01:21:28.544744141Z" level=info msg="connecting to shim d0d3dee6febe6d6504a1b4d84499593abf604423adbd7f3be7e8380d26b76c30" address="unix:///run/containerd/s/2316809c0f96de556101bdca5cad07abba56b69543143dc2b7f1c1ecf1163490" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:28.552881 kubelet[2830]: E0514 01:21:28.552850 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.552881 kubelet[2830]: W0514 01:21:28.552875 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.552881 kubelet[2830]: E0514 01:21:28.552896 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.553303 kubelet[2830]: E0514 01:21:28.553090 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.553303 kubelet[2830]: W0514 01:21:28.553096 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.553303 kubelet[2830]: E0514 01:21:28.553112 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:21:28.553303 kubelet[2830]: E0514 01:21:28.553270 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.553303 kubelet[2830]: W0514 01:21:28.553276 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.553303 kubelet[2830]: E0514 01:21:28.553293 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.554019 kubelet[2830]: E0514 01:21:28.553474 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.554019 kubelet[2830]: W0514 01:21:28.553480 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.554019 kubelet[2830]: E0514 01:21:28.553489 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.554019 kubelet[2830]: E0514 01:21:28.553656 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.554019 kubelet[2830]: W0514 01:21:28.553662 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.554019 kubelet[2830]: E0514 01:21:28.553670 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.554019 kubelet[2830]: E0514 01:21:28.553826 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.554019 kubelet[2830]: W0514 01:21:28.553832 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.554019 kubelet[2830]: E0514 01:21:28.553848 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.554019 kubelet[2830]: E0514 01:21:28.553974 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.554802 kubelet[2830]: W0514 01:21:28.553980 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.554802 kubelet[2830]: E0514 01:21:28.553997 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:21:28.554802 kubelet[2830]: E0514 01:21:28.554118 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.554802 kubelet[2830]: W0514 01:21:28.554124 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.554802 kubelet[2830]: E0514 01:21:28.554173 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.554802 kubelet[2830]: E0514 01:21:28.554302 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.554802 kubelet[2830]: W0514 01:21:28.554308 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.554802 kubelet[2830]: E0514 01:21:28.554661 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.554802 kubelet[2830]: E0514 01:21:28.554748 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.554802 kubelet[2830]: W0514 01:21:28.554754 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.555411 kubelet[2830]: E0514 01:21:28.554831 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.555411 kubelet[2830]: E0514 01:21:28.554901 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.555411 kubelet[2830]: W0514 01:21:28.554907 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.555411 kubelet[2830]: E0514 01:21:28.555134 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.555411 kubelet[2830]: E0514 01:21:28.555292 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.555411 kubelet[2830]: W0514 01:21:28.555298 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.555528 kubelet[2830]: E0514 01:21:28.555443 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:21:28.556204 kubelet[2830]: E0514 01:21:28.555625 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.556204 kubelet[2830]: W0514 01:21:28.555644 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.556204 kubelet[2830]: E0514 01:21:28.555786 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.556204 kubelet[2830]: E0514 01:21:28.555882 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.556204 kubelet[2830]: W0514 01:21:28.555887 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.556204 kubelet[2830]: E0514 01:21:28.555949 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.558650 kubelet[2830]: E0514 01:21:28.556659 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.558650 kubelet[2830]: W0514 01:21:28.556669 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.558650 kubelet[2830]: E0514 01:21:28.556759 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.558650 kubelet[2830]: E0514 01:21:28.556850 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.558650 kubelet[2830]: W0514 01:21:28.556855 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.558650 kubelet[2830]: E0514 01:21:28.556902 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.558650 kubelet[2830]: E0514 01:21:28.556970 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.558650 kubelet[2830]: W0514 01:21:28.556975 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.558650 kubelet[2830]: E0514 01:21:28.557021 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:21:28.558650 kubelet[2830]: E0514 01:21:28.557087 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.558865 kubelet[2830]: W0514 01:21:28.557093 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.558865 kubelet[2830]: E0514 01:21:28.557174 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.558865 kubelet[2830]: E0514 01:21:28.557244 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.558865 kubelet[2830]: W0514 01:21:28.557250 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.558865 kubelet[2830]: E0514 01:21:28.557683 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.558865 kubelet[2830]: E0514 01:21:28.557820 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.558865 kubelet[2830]: W0514 01:21:28.557826 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.558865 kubelet[2830]: E0514 01:21:28.557843 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.558865 kubelet[2830]: E0514 01:21:28.557988 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.558865 kubelet[2830]: W0514 01:21:28.557994 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.559051 kubelet[2830]: E0514 01:21:28.558010 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.559051 kubelet[2830]: E0514 01:21:28.558743 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.559051 kubelet[2830]: W0514 01:21:28.558750 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.559051 kubelet[2830]: E0514 01:21:28.558760 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 14 01:21:28.559051 kubelet[2830]: E0514 01:21:28.558891 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.559051 kubelet[2830]: W0514 01:21:28.558897 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.559051 kubelet[2830]: E0514 01:21:28.558903 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.559051 kubelet[2830]: E0514 01:21:28.559011 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.559051 kubelet[2830]: W0514 01:21:28.559017 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.559051 kubelet[2830]: E0514 01:21:28.559023 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.562657 kubelet[2830]: E0514 01:21:28.560745 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.562657 kubelet[2830]: W0514 01:21:28.560754 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.562657 kubelet[2830]: E0514 01:21:28.560764 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.569587 containerd[1527]: time="2025-05-14T01:21:28.569545741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wvt49,Uid:5ba06c45-e29a-43cb-83e3-5bfa306ddbc2,Namespace:calico-system,Attempt:0,}" May 14 01:21:28.580123 kubelet[2830]: E0514 01:21:28.579808 2830 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 14 01:21:28.580123 kubelet[2830]: W0514 01:21:28.579825 2830 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 14 01:21:28.580123 kubelet[2830]: E0514 01:21:28.579843 2830 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 14 01:21:28.586807 systemd[1]: Started cri-containerd-d0d3dee6febe6d6504a1b4d84499593abf604423adbd7f3be7e8380d26b76c30.scope - libcontainer container d0d3dee6febe6d6504a1b4d84499593abf604423adbd7f3be7e8380d26b76c30. 
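
The repeated FlexVolume failures above all share one cause: the kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init", the executable does not exist yet, so the command produces no output and decoding that empty output as JSON fails. A minimal Go sketch of that decode step follows; the driverStatus type is a simplified stand-in for a FlexVolume reply, not the kubelet's own code. The errors stopping later in the log is consistent with the flexvol-driver container (started below) installing the missing binary.

package main

import (
        "encoding/json"
        "fmt"
)

// driverStatus is a simplified stand-in for the JSON a FlexVolume driver is
// expected to print; it is not the kubelet's own type.
type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
        var st driverStatus

        // Empty output, as when the driver binary is missing from the plugin
        // directory: json.Unmarshal fails with "unexpected end of JSON input",
        // the exact error quoted in the kubelet log above.
        if err := json.Unmarshal([]byte(""), &st); err != nil {
                fmt.Println("empty driver output:", err)
        }

        // A well-formed "init" reply, for contrast.
        ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
        if err := json.Unmarshal(ok, &st); err == nil {
                fmt.Printf("valid driver output: %+v\n", st)
        }
}
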
May 14 01:21:28.617992 containerd[1527]: time="2025-05-14T01:21:28.617949772Z" level=info msg="connecting to shim fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33" address="unix:///run/containerd/s/fe25bffe94ecbf339945e872cae61ca7649501f05630cd00570b002b5197b02c" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:28.647793 systemd[1]: Started cri-containerd-fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33.scope - libcontainer container fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33. May 14 01:21:28.677152 containerd[1527]: time="2025-05-14T01:21:28.677075088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wvt49,Uid:5ba06c45-e29a-43cb-83e3-5bfa306ddbc2,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33\"" May 14 01:21:28.679213 containerd[1527]: time="2025-05-14T01:21:28.679058722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 14 01:21:28.705683 containerd[1527]: time="2025-05-14T01:21:28.704494689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8dc7ccffb-w6dqq,Uid:4a6291b5-5586-4151-b24e-80e9823aed09,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0d3dee6febe6d6504a1b4d84499593abf604423adbd7f3be7e8380d26b76c30\"" May 14 01:21:29.810568 kubelet[2830]: E0514 01:21:29.810482 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2jzfd" podUID="1ff2619d-9812-4662-94f4-719dc464e433" May 14 01:21:30.963369 containerd[1527]: time="2025-05-14T01:21:30.963299906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:30.964703 containerd[1527]: time="2025-05-14T01:21:30.964660165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5366937" May 14 01:21:30.966043 containerd[1527]: time="2025-05-14T01:21:30.966008090Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:30.968522 containerd[1527]: time="2025-05-14T01:21:30.968487898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:30.969136 containerd[1527]: time="2025-05-14T01:21:30.969039207Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 2.289958564s" May 14 01:21:30.969136 containerd[1527]: time="2025-05-14T01:21:30.969063063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 14 01:21:30.970548 containerd[1527]: time="2025-05-14T01:21:30.970402172Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 14 01:21:30.971564 containerd[1527]: time="2025-05-14T01:21:30.971531120Z" level=info msg="CreateContainer within sandbox \"fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 14 01:21:30.982221 containerd[1527]: time="2025-05-14T01:21:30.980620295Z" level=info msg="Container 17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:30.999580 containerd[1527]: time="2025-05-14T01:21:30.999514999Z" level=info msg="CreateContainer within sandbox \"fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517\"" May 14 01:21:31.003380 containerd[1527]: time="2025-05-14T01:21:31.003246471Z" level=info msg="StartContainer for \"17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517\"" May 14 01:21:31.007044 containerd[1527]: time="2025-05-14T01:21:31.006984575Z" level=info msg="connecting to shim 17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517" address="unix:///run/containerd/s/fe25bffe94ecbf339945e872cae61ca7649501f05630cd00570b002b5197b02c" protocol=ttrpc version=3 May 14 01:21:31.033881 systemd[1]: Started cri-containerd-17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517.scope - libcontainer container 17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517. May 14 01:21:31.078884 containerd[1527]: time="2025-05-14T01:21:31.078838465Z" level=info msg="StartContainer for \"17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517\" returns successfully" May 14 01:21:31.086835 systemd[1]: cri-containerd-17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517.scope: Deactivated successfully. May 14 01:21:31.091049 containerd[1527]: time="2025-05-14T01:21:31.091012424Z" level=info msg="received exit event container_id:\"17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517\" id:\"17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517\" pid:3356 exited_at:{seconds:1747185691 nanos:90328171}" May 14 01:21:31.102326 containerd[1527]: time="2025-05-14T01:21:31.102155511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517\" id:\"17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517\" pid:3356 exited_at:{seconds:1747185691 nanos:90328171}" May 14 01:21:31.123134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517-rootfs.mount: Deactivated successfully. 
May 14 01:21:31.810447 kubelet[2830]: E0514 01:21:31.810374 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2jzfd" podUID="1ff2619d-9812-4662-94f4-719dc464e433" May 14 01:21:33.773895 containerd[1527]: time="2025-05-14T01:21:33.773804726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:33.775040 containerd[1527]: time="2025-05-14T01:21:33.774982117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=30426870" May 14 01:21:33.776398 containerd[1527]: time="2025-05-14T01:21:33.776356804Z" level=info msg="ImageCreate event name:\"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:33.778488 containerd[1527]: time="2025-05-14T01:21:33.778466130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:33.779089 containerd[1527]: time="2025-05-14T01:21:33.778867754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"31919484\" in 2.808429695s" May 14 01:21:33.779089 containerd[1527]: time="2025-05-14T01:21:33.778906598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 14 01:21:33.780107 containerd[1527]: time="2025-05-14T01:21:33.779959823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 14 01:21:33.797785 containerd[1527]: time="2025-05-14T01:21:33.797742214Z" level=info msg="CreateContainer within sandbox \"d0d3dee6febe6d6504a1b4d84499593abf604423adbd7f3be7e8380d26b76c30\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 14 01:21:33.807932 containerd[1527]: time="2025-05-14T01:21:33.807876387Z" level=info msg="Container 0d6adeb94069bbf8040a07ed24b8ea85bbd2f68d5a3f60c265c2113d8022be95: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:33.810270 kubelet[2830]: E0514 01:21:33.810233 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2jzfd" podUID="1ff2619d-9812-4662-94f4-719dc464e433" May 14 01:21:33.818817 containerd[1527]: time="2025-05-14T01:21:33.818773141Z" level=info msg="CreateContainer within sandbox \"d0d3dee6febe6d6504a1b4d84499593abf604423adbd7f3be7e8380d26b76c30\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0d6adeb94069bbf8040a07ed24b8ea85bbd2f68d5a3f60c265c2113d8022be95\"" May 14 01:21:33.820006 containerd[1527]: time="2025-05-14T01:21:33.819269976Z" level=info msg="StartContainer for \"0d6adeb94069bbf8040a07ed24b8ea85bbd2f68d5a3f60c265c2113d8022be95\"" May 14 
01:21:33.820305 containerd[1527]: time="2025-05-14T01:21:33.820284188Z" level=info msg="connecting to shim 0d6adeb94069bbf8040a07ed24b8ea85bbd2f68d5a3f60c265c2113d8022be95" address="unix:///run/containerd/s/2316809c0f96de556101bdca5cad07abba56b69543143dc2b7f1c1ecf1163490" protocol=ttrpc version=3 May 14 01:21:33.839746 systemd[1]: Started cri-containerd-0d6adeb94069bbf8040a07ed24b8ea85bbd2f68d5a3f60c265c2113d8022be95.scope - libcontainer container 0d6adeb94069bbf8040a07ed24b8ea85bbd2f68d5a3f60c265c2113d8022be95. May 14 01:21:33.881241 containerd[1527]: time="2025-05-14T01:21:33.881203308Z" level=info msg="StartContainer for \"0d6adeb94069bbf8040a07ed24b8ea85bbd2f68d5a3f60c265c2113d8022be95\" returns successfully" May 14 01:21:34.909501 kubelet[2830]: I0514 01:21:34.909455 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:21:35.812334 kubelet[2830]: E0514 01:21:35.812223 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2jzfd" podUID="1ff2619d-9812-4662-94f4-719dc464e433" May 14 01:21:37.811185 kubelet[2830]: E0514 01:21:37.811002 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2jzfd" podUID="1ff2619d-9812-4662-94f4-719dc464e433" May 14 01:21:39.718896 containerd[1527]: time="2025-05-14T01:21:39.718841394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:39.720014 containerd[1527]: time="2025-05-14T01:21:39.719953974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" May 14 01:21:39.721142 containerd[1527]: time="2025-05-14T01:21:39.721101509Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:39.722896 containerd[1527]: time="2025-05-14T01:21:39.722860080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:39.723629 containerd[1527]: time="2025-05-14T01:21:39.723329284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.943349042s" May 14 01:21:39.723629 containerd[1527]: time="2025-05-14T01:21:39.723368368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 14 01:21:39.728243 containerd[1527]: time="2025-05-14T01:21:39.728209962Z" level=info msg="CreateContainer within sandbox \"fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 01:21:39.738938 containerd[1527]: 
time="2025-05-14T01:21:39.738919553Z" level=info msg="Container b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:39.752349 containerd[1527]: time="2025-05-14T01:21:39.752294123Z" level=info msg="CreateContainer within sandbox \"fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124\"" May 14 01:21:39.753979 containerd[1527]: time="2025-05-14T01:21:39.753531690Z" level=info msg="StartContainer for \"b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124\"" May 14 01:21:39.755430 containerd[1527]: time="2025-05-14T01:21:39.755380693Z" level=info msg="connecting to shim b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124" address="unix:///run/containerd/s/fe25bffe94ecbf339945e872cae61ca7649501f05630cd00570b002b5197b02c" protocol=ttrpc version=3 May 14 01:21:39.800866 systemd[1]: Started cri-containerd-b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124.scope - libcontainer container b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124. May 14 01:21:39.816473 kubelet[2830]: E0514 01:21:39.815466 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2jzfd" podUID="1ff2619d-9812-4662-94f4-719dc464e433" May 14 01:21:39.841048 containerd[1527]: time="2025-05-14T01:21:39.840978389Z" level=info msg="StartContainer for \"b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124\" returns successfully" May 14 01:21:39.991868 kubelet[2830]: I0514 01:21:39.991369 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8dc7ccffb-w6dqq" podStartSLOduration=6.917297416 podStartE2EDuration="11.991349671s" podCreationTimestamp="2025-05-14 01:21:28 +0000 UTC" firstStartedPulling="2025-05-14 01:21:28.705811384 +0000 UTC m=+22.025624998" lastFinishedPulling="2025-05-14 01:21:33.779863639 +0000 UTC m=+27.099677253" observedRunningTime="2025-05-14 01:21:33.926618003 +0000 UTC m=+27.246431636" watchObservedRunningTime="2025-05-14 01:21:39.991349671 +0000 UTC m=+33.311163284" May 14 01:21:40.308261 systemd[1]: cri-containerd-b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124.scope: Deactivated successfully. May 14 01:21:40.309525 systemd[1]: cri-containerd-b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124.scope: Consumed 456ms CPU time, 152.4M memory peak, 1.5M read from disk, 154M written to disk. 
May 14 01:21:40.315834 containerd[1527]: time="2025-05-14T01:21:40.315654556Z" level=info msg="received exit event container_id:\"b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124\" id:\"b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124\" pid:3455 exited_at:{seconds:1747185700 nanos:314752768}" May 14 01:21:40.318296 containerd[1527]: time="2025-05-14T01:21:40.318230133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124\" id:\"b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124\" pid:3455 exited_at:{seconds:1747185700 nanos:314752768}" May 14 01:21:40.353311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124-rootfs.mount: Deactivated successfully. May 14 01:21:40.438340 kubelet[2830]: I0514 01:21:40.436940 2830 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 14 01:21:40.491423 systemd[1]: Created slice kubepods-besteffort-podc4168a6e_c5d1_417e_a08b_b1b4c467bfb1.slice - libcontainer container kubepods-besteffort-podc4168a6e_c5d1_417e_a08b_b1b4c467bfb1.slice. May 14 01:21:40.505518 systemd[1]: Created slice kubepods-burstable-pod69f83135_f92e_4fa8_9b4c_55dff88f63e2.slice - libcontainer container kubepods-burstable-pod69f83135_f92e_4fa8_9b4c_55dff88f63e2.slice. May 14 01:21:40.512509 systemd[1]: Created slice kubepods-besteffort-podcd2bdce0_0bb7_4bf3_bc56_675335a5f3f5.slice - libcontainer container kubepods-besteffort-podcd2bdce0_0bb7_4bf3_bc56_675335a5f3f5.slice. May 14 01:21:40.519612 systemd[1]: Created slice kubepods-burstable-pod1a53d4b9_a854_4ca0_a736_f4ef5a7d06c4.slice - libcontainer container kubepods-burstable-pod1a53d4b9_a854_4ca0_a736_f4ef5a7d06c4.slice. May 14 01:21:40.528113 systemd[1]: Created slice kubepods-besteffort-podb38cbe2b_a17c_45d3_bd71_db37dd71c076.slice - libcontainer container kubepods-besteffort-podb38cbe2b_a17c_45d3_bd71_db37dd71c076.slice. 
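
The "Observed pod startup duration" entry for calico-typha-8dc7ccffb-w6dqq above reports podStartE2EDuration="11.991349671s" and podStartSLOduration=6.917297416, and both figures can be reproduced from the timestamps in the same entry: E2E is the observed running time minus the pod creation time, and the SLO figure additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). The sketch below redoes that arithmetic with the logged values (monotonic "m=+..." suffixes dropped); it is a worked check, not kubelet code.

package main

import (
        "fmt"
        "time"
)

// mustParse reads the kubelet's default time formatting, with the monotonic
// "m=+..." suffix already dropped from the logged values.
func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
                panic(err)
        }
        return t
}

func main() {
        created := mustParse("2025-05-14 01:21:28 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2025-05-14 01:21:28.705811384 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2025-05-14 01:21:33.779863639 +0000 UTC")  // lastFinishedPulling
        observed := mustParse("2025-05-14 01:21:39.991349671 +0000 UTC")  // watchObservedRunningTime

        e2e := observed.Sub(created)         // 11.991349671s, the logged podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 6.917297416s, the logged podStartSLOduration
        fmt.Println("E2E:", e2e, "SLO:", slo)
}
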
May 14 01:21:40.549446 kubelet[2830]: I0514 01:21:40.549310 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69f83135-f92e-4fa8-9b4c-55dff88f63e2-config-volume\") pod \"coredns-668d6bf9bc-7wggf\" (UID: \"69f83135-f92e-4fa8-9b4c-55dff88f63e2\") " pod="kube-system/coredns-668d6bf9bc-7wggf" May 14 01:21:40.549446 kubelet[2830]: I0514 01:21:40.549345 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smnxk\" (UniqueName: \"kubernetes.io/projected/69f83135-f92e-4fa8-9b4c-55dff88f63e2-kube-api-access-smnxk\") pod \"coredns-668d6bf9bc-7wggf\" (UID: \"69f83135-f92e-4fa8-9b4c-55dff88f63e2\") " pod="kube-system/coredns-668d6bf9bc-7wggf" May 14 01:21:40.549446 kubelet[2830]: I0514 01:21:40.549362 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c4168a6e-c5d1-417e-a08b-b1b4c467bfb1-calico-apiserver-certs\") pod \"calico-apiserver-76bc6945bb-sc7fw\" (UID: \"c4168a6e-c5d1-417e-a08b-b1b4c467bfb1\") " pod="calico-apiserver/calico-apiserver-76bc6945bb-sc7fw" May 14 01:21:40.549446 kubelet[2830]: I0514 01:21:40.549375 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2hb8\" (UniqueName: \"kubernetes.io/projected/c4168a6e-c5d1-417e-a08b-b1b4c467bfb1-kube-api-access-m2hb8\") pod \"calico-apiserver-76bc6945bb-sc7fw\" (UID: \"c4168a6e-c5d1-417e-a08b-b1b4c467bfb1\") " pod="calico-apiserver/calico-apiserver-76bc6945bb-sc7fw" May 14 01:21:40.549446 kubelet[2830]: I0514 01:21:40.549391 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4-config-volume\") pod \"coredns-668d6bf9bc-7vqwz\" (UID: \"1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4\") " pod="kube-system/coredns-668d6bf9bc-7vqwz" May 14 01:21:40.549738 kubelet[2830]: I0514 01:21:40.549406 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b38cbe2b-a17c-45d3-bd71-db37dd71c076-tigera-ca-bundle\") pod \"calico-kube-controllers-75876f4c7b-2qdpw\" (UID: \"b38cbe2b-a17c-45d3-bd71-db37dd71c076\") " pod="calico-system/calico-kube-controllers-75876f4c7b-2qdpw" May 14 01:21:40.549738 kubelet[2830]: I0514 01:21:40.549472 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfgmq\" (UniqueName: \"kubernetes.io/projected/1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4-kube-api-access-cfgmq\") pod \"coredns-668d6bf9bc-7vqwz\" (UID: \"1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4\") " pod="kube-system/coredns-668d6bf9bc-7vqwz" May 14 01:21:40.549738 kubelet[2830]: I0514 01:21:40.549535 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w99lw\" (UniqueName: \"kubernetes.io/projected/b38cbe2b-a17c-45d3-bd71-db37dd71c076-kube-api-access-w99lw\") pod \"calico-kube-controllers-75876f4c7b-2qdpw\" (UID: \"b38cbe2b-a17c-45d3-bd71-db37dd71c076\") " pod="calico-system/calico-kube-controllers-75876f4c7b-2qdpw" May 14 01:21:40.549738 kubelet[2830]: I0514 01:21:40.549612 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5-calico-apiserver-certs\") pod \"calico-apiserver-76bc6945bb-vnsck\" (UID: \"cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5\") " pod="calico-apiserver/calico-apiserver-76bc6945bb-vnsck" May 14 01:21:40.549738 kubelet[2830]: I0514 01:21:40.549678 2830 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qc4r\" (UniqueName: \"kubernetes.io/projected/cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5-kube-api-access-6qc4r\") pod \"calico-apiserver-76bc6945bb-vnsck\" (UID: \"cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5\") " pod="calico-apiserver/calico-apiserver-76bc6945bb-vnsck" May 14 01:21:40.807565 containerd[1527]: time="2025-05-14T01:21:40.807326978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bc6945bb-sc7fw,Uid:c4168a6e-c5d1-417e-a08b-b1b4c467bfb1,Namespace:calico-apiserver,Attempt:0,}" May 14 01:21:40.813663 containerd[1527]: time="2025-05-14T01:21:40.813551687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7wggf,Uid:69f83135-f92e-4fa8-9b4c-55dff88f63e2,Namespace:kube-system,Attempt:0,}" May 14 01:21:40.847852 containerd[1527]: time="2025-05-14T01:21:40.847444050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bc6945bb-vnsck,Uid:cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5,Namespace:calico-apiserver,Attempt:0,}" May 14 01:21:40.854329 containerd[1527]: time="2025-05-14T01:21:40.854266529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vqwz,Uid:1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4,Namespace:kube-system,Attempt:0,}" May 14 01:21:40.855379 containerd[1527]: time="2025-05-14T01:21:40.855042607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75876f4c7b-2qdpw,Uid:b38cbe2b-a17c-45d3-bd71-db37dd71c076,Namespace:calico-system,Attempt:0,}" May 14 01:21:40.966565 containerd[1527]: time="2025-05-14T01:21:40.966316070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 14 01:21:41.065554 containerd[1527]: time="2025-05-14T01:21:41.065426952Z" level=error msg="Failed to destroy network for sandbox \"f172e9628c8f7b870e58d360da1d570c56e280ee9dab9a70cf34a3fcbb70ecc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.071998 containerd[1527]: time="2025-05-14T01:21:41.071347894Z" level=error msg="Failed to destroy network for sandbox \"20fbc1942889343dae301d3ac19d1056e57989a5545f02eed5a6268f371fcb78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.101416 containerd[1527]: time="2025-05-14T01:21:41.077822350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7wggf,Uid:69f83135-f92e-4fa8-9b4c-55dff88f63e2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20fbc1942889343dae301d3ac19d1056e57989a5545f02eed5a6268f371fcb78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.109726 containerd[1527]: time="2025-05-14T01:21:41.092757589Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75876f4c7b-2qdpw,Uid:b38cbe2b-a17c-45d3-bd71-db37dd71c076,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f172e9628c8f7b870e58d360da1d570c56e280ee9dab9a70cf34a3fcbb70ecc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.109990 containerd[1527]: time="2025-05-14T01:21:41.109782148Z" level=error msg="Failed to destroy network for sandbox \"2e3ac50b6667c76cc30b043535a3063dc51868b5d0312c79b4d97b397deebbce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.110315 containerd[1527]: time="2025-05-14T01:21:41.092904268Z" level=error msg="Failed to destroy network for sandbox \"8988259962a5bf0b7a2b3a3b2d6b5a68b188dcbe8c806b3de3e3c2492561f91d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.110694 containerd[1527]: time="2025-05-14T01:21:41.092879861Z" level=error msg="Failed to destroy network for sandbox \"5d10a3a5e2869f1cac2606d09a84f0de52d94328a6d7cd6476a1a602ca2f5421\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.111345 kubelet[2830]: E0514 01:21:41.111284 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f172e9628c8f7b870e58d360da1d570c56e280ee9dab9a70cf34a3fcbb70ecc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.111589 containerd[1527]: time="2025-05-14T01:21:41.111425600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bc6945bb-sc7fw,Uid:c4168a6e-c5d1-417e-a08b-b1b4c467bfb1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e3ac50b6667c76cc30b043535a3063dc51868b5d0312c79b4d97b397deebbce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.112049 kubelet[2830]: E0514 01:21:41.111747 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f172e9628c8f7b870e58d360da1d570c56e280ee9dab9a70cf34a3fcbb70ecc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75876f4c7b-2qdpw" May 14 01:21:41.112049 kubelet[2830]: E0514 01:21:41.111772 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f172e9628c8f7b870e58d360da1d570c56e280ee9dab9a70cf34a3fcbb70ecc4\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75876f4c7b-2qdpw" May 14 01:21:41.112049 kubelet[2830]: E0514 01:21:41.111812 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75876f4c7b-2qdpw_calico-system(b38cbe2b-a17c-45d3-bd71-db37dd71c076)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75876f4c7b-2qdpw_calico-system(b38cbe2b-a17c-45d3-bd71-db37dd71c076)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f172e9628c8f7b870e58d360da1d570c56e280ee9dab9a70cf34a3fcbb70ecc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75876f4c7b-2qdpw" podUID="b38cbe2b-a17c-45d3-bd71-db37dd71c076" May 14 01:21:41.112166 kubelet[2830]: E0514 01:21:41.111838 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e3ac50b6667c76cc30b043535a3063dc51868b5d0312c79b4d97b397deebbce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.112166 kubelet[2830]: E0514 01:21:41.111887 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e3ac50b6667c76cc30b043535a3063dc51868b5d0312c79b4d97b397deebbce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bc6945bb-sc7fw" May 14 01:21:41.112166 kubelet[2830]: E0514 01:21:41.111917 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e3ac50b6667c76cc30b043535a3063dc51868b5d0312c79b4d97b397deebbce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bc6945bb-sc7fw" May 14 01:21:41.112237 kubelet[2830]: E0514 01:21:41.111960 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bc6945bb-sc7fw_calico-apiserver(c4168a6e-c5d1-417e-a08b-b1b4c467bfb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bc6945bb-sc7fw_calico-apiserver(c4168a6e-c5d1-417e-a08b-b1b4c467bfb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e3ac50b6667c76cc30b043535a3063dc51868b5d0312c79b4d97b397deebbce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76bc6945bb-sc7fw" podUID="c4168a6e-c5d1-417e-a08b-b1b4c467bfb1" May 14 01:21:41.112237 kubelet[2830]: E0514 01:21:41.111286 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"20fbc1942889343dae301d3ac19d1056e57989a5545f02eed5a6268f371fcb78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.112237 kubelet[2830]: E0514 01:21:41.111997 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20fbc1942889343dae301d3ac19d1056e57989a5545f02eed5a6268f371fcb78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7wggf" May 14 01:21:41.112563 kubelet[2830]: E0514 01:21:41.112532 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20fbc1942889343dae301d3ac19d1056e57989a5545f02eed5a6268f371fcb78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7wggf" May 14 01:21:41.112762 kubelet[2830]: E0514 01:21:41.112628 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7wggf_kube-system(69f83135-f92e-4fa8-9b4c-55dff88f63e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7wggf_kube-system(69f83135-f92e-4fa8-9b4c-55dff88f63e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20fbc1942889343dae301d3ac19d1056e57989a5545f02eed5a6268f371fcb78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7wggf" podUID="69f83135-f92e-4fa8-9b4c-55dff88f63e2" May 14 01:21:41.113107 containerd[1527]: time="2025-05-14T01:21:41.112960614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bc6945bb-vnsck,Uid:cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8988259962a5bf0b7a2b3a3b2d6b5a68b188dcbe8c806b3de3e3c2492561f91d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.113394 kubelet[2830]: E0514 01:21:41.113327 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8988259962a5bf0b7a2b3a3b2d6b5a68b188dcbe8c806b3de3e3c2492561f91d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.113481 kubelet[2830]: E0514 01:21:41.113452 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8988259962a5bf0b7a2b3a3b2d6b5a68b188dcbe8c806b3de3e3c2492561f91d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-76bc6945bb-vnsck" May 14 01:21:41.113554 kubelet[2830]: E0514 01:21:41.113523 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8988259962a5bf0b7a2b3a3b2d6b5a68b188dcbe8c806b3de3e3c2492561f91d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76bc6945bb-vnsck" May 14 01:21:41.113816 kubelet[2830]: E0514 01:21:41.113575 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76bc6945bb-vnsck_calico-apiserver(cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76bc6945bb-vnsck_calico-apiserver(cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8988259962a5bf0b7a2b3a3b2d6b5a68b188dcbe8c806b3de3e3c2492561f91d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76bc6945bb-vnsck" podUID="cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5" May 14 01:21:41.114359 containerd[1527]: time="2025-05-14T01:21:41.114205176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vqwz,Uid:1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d10a3a5e2869f1cac2606d09a84f0de52d94328a6d7cd6476a1a602ca2f5421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.114418 kubelet[2830]: E0514 01:21:41.114367 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d10a3a5e2869f1cac2606d09a84f0de52d94328a6d7cd6476a1a602ca2f5421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.114418 kubelet[2830]: E0514 01:21:41.114397 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d10a3a5e2869f1cac2606d09a84f0de52d94328a6d7cd6476a1a602ca2f5421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7vqwz" May 14 01:21:41.114418 kubelet[2830]: E0514 01:21:41.114412 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d10a3a5e2869f1cac2606d09a84f0de52d94328a6d7cd6476a1a602ca2f5421\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7vqwz" May 14 01:21:41.114485 kubelet[2830]: E0514 01:21:41.114438 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"coredns-668d6bf9bc-7vqwz_kube-system(1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7vqwz_kube-system(1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d10a3a5e2869f1cac2606d09a84f0de52d94328a6d7cd6476a1a602ca2f5421\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7vqwz" podUID="1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4" May 14 01:21:41.822450 systemd[1]: Created slice kubepods-besteffort-pod1ff2619d_9812_4662_94f4_719dc464e433.slice - libcontainer container kubepods-besteffort-pod1ff2619d_9812_4662_94f4_719dc464e433.slice. May 14 01:21:41.829209 systemd[1]: run-netns-cni\x2d852300c5\x2d282b\x2de2e9\x2d6735\x2d8e43cc04dc1f.mount: Deactivated successfully. May 14 01:21:41.829725 systemd[1]: run-netns-cni\x2dd50548e0\x2d756f\x2da0cf\x2deee2\x2d99966d382c8c.mount: Deactivated successfully. May 14 01:21:41.829867 systemd[1]: run-netns-cni\x2d1a3f02ee\x2d3134\x2d157b\x2d22db\x2d61507fbd57a1.mount: Deactivated successfully. May 14 01:21:41.829987 systemd[1]: run-netns-cni\x2d4dbb10cc\x2ddaa6\x2d0f58\x2dcf1c\x2d99ebad91eb95.mount: Deactivated successfully. May 14 01:21:41.830139 systemd[1]: run-netns-cni\x2d7f032473\x2db223\x2d3667\x2d1d29\x2d6fd098fb1181.mount: Deactivated successfully. May 14 01:21:41.838850 containerd[1527]: time="2025-05-14T01:21:41.838780343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2jzfd,Uid:1ff2619d-9812-4662-94f4-719dc464e433,Namespace:calico-system,Attempt:0,}" May 14 01:21:41.922761 containerd[1527]: time="2025-05-14T01:21:41.922684366Z" level=error msg="Failed to destroy network for sandbox \"e47decbe5e440db711988779cfe059ea22ec7aee51ea1fcefb48a822961d22bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.926399 containerd[1527]: time="2025-05-14T01:21:41.926349159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2jzfd,Uid:1ff2619d-9812-4662-94f4-719dc464e433,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47decbe5e440db711988779cfe059ea22ec7aee51ea1fcefb48a822961d22bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.926936 kubelet[2830]: E0514 01:21:41.926886 2830 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47decbe5e440db711988779cfe059ea22ec7aee51ea1fcefb48a822961d22bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 14 01:21:41.927714 kubelet[2830]: E0514 01:21:41.926961 2830 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47decbe5e440db711988779cfe059ea22ec7aee51ea1fcefb48a822961d22bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2jzfd" May 14 01:21:41.927714 kubelet[2830]: E0514 01:21:41.926988 2830 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e47decbe5e440db711988779cfe059ea22ec7aee51ea1fcefb48a822961d22bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2jzfd" May 14 01:21:41.927714 kubelet[2830]: E0514 01:21:41.927040 2830 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2jzfd_calico-system(1ff2619d-9812-4662-94f4-719dc464e433)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2jzfd_calico-system(1ff2619d-9812-4662-94f4-719dc464e433)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e47decbe5e440db711988779cfe059ea22ec7aee51ea1fcefb48a822961d22bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2jzfd" podUID="1ff2619d-9812-4662-94f4-719dc464e433" May 14 01:21:41.928542 systemd[1]: run-netns-cni\x2d3ac534ac\x2d0006\x2de29b\x2d2dd5\x2d9cccce92ab0c.mount: Deactivated successfully. May 14 01:21:48.439071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673542200.mount: Deactivated successfully. May 14 01:21:48.638323 containerd[1527]: time="2025-05-14T01:21:48.635667370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:48.714672 containerd[1527]: time="2025-05-14T01:21:48.612035141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" May 14 01:21:48.751127 containerd[1527]: time="2025-05-14T01:21:48.751060673Z" level=info msg="ImageCreate event name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:48.753994 containerd[1527]: time="2025-05-14T01:21:48.753778354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:48.755140 containerd[1527]: time="2025-05-14T01:21:48.754992349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 7.787587654s" May 14 01:21:48.755140 containerd[1527]: time="2025-05-14T01:21:48.755023900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 14 01:21:48.847059 containerd[1527]: time="2025-05-14T01:21:48.846995076Z" level=info msg="CreateContainer within sandbox \"fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 14 
01:21:48.906281 containerd[1527]: time="2025-05-14T01:21:48.906120175Z" level=info msg="Container 7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:48.949860 containerd[1527]: time="2025-05-14T01:21:48.949777264Z" level=info msg="CreateContainer within sandbox \"fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\"" May 14 01:21:48.956561 containerd[1527]: time="2025-05-14T01:21:48.956227613Z" level=info msg="StartContainer for \"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\"" May 14 01:21:48.965574 containerd[1527]: time="2025-05-14T01:21:48.965374402Z" level=info msg="connecting to shim 7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1" address="unix:///run/containerd/s/fe25bffe94ecbf339945e872cae61ca7649501f05630cd00570b002b5197b02c" protocol=ttrpc version=3 May 14 01:21:49.138781 systemd[1]: Started cri-containerd-7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1.scope - libcontainer container 7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1. May 14 01:21:49.157223 systemd[1]: Started sshd@9-37.27.220.42:22-83.222.191.218:57058.service - OpenSSH per-connection server daemon (83.222.191.218:57058). May 14 01:21:49.219759 containerd[1527]: time="2025-05-14T01:21:49.219479512Z" level=info msg="StartContainer for \"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" returns successfully" May 14 01:21:49.300866 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 14 01:21:49.302470 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
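
Every sandbox failure above carries the same underlying error: stat /var/lib/calico/nodename: no such file or directory. The Calico CNI plugin resolves the node name from that file, which the calico/node container writes once it is running and has /var/lib/calico/ mounted from the host; until then every pod sandbox add or delete fails, which is why the coredns, calico-apiserver, calico-kube-controllers, and csi-node-driver pods above cannot get networking before calico-node starts (at 01:21:49 above). A minimal sketch of that file check, assuming nothing beyond what the error text itself states:

package main

import (
        "fmt"
        "os"
        "strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// nodeName mimics the failing check using only what the error text states:
// the file must exist (calico/node writes it) before a node name can be read.
// This is an illustrative sketch, not the Calico CNI plugin's actual code.
func nodeName() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
                // Produces "stat /var/lib/calico/nodename: no such file or
                // directory: check that the calico/node container is running
                // and has mounted /var/lib/calico/" when the file is absent.
                return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
                return "", err
        }
        return strings.TrimSpace(string(data)), nil
}

func main() {
        name, err := nodeName()
        if err != nil {
                fmt.Println("sandbox setup would fail here:", err)
                return
        }
        fmt.Println("node name:", name)
}
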
May 14 01:21:49.449230 sshd[3699]: Invalid user from 83.222.191.218 port 57058 May 14 01:21:50.098364 kubelet[2830]: I0514 01:21:50.093756 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wvt49" podStartSLOduration=1.980390029 podStartE2EDuration="22.084233761s" podCreationTimestamp="2025-05-14 01:21:28 +0000 UTC" firstStartedPulling="2025-05-14 01:21:28.678877658 +0000 UTC m=+21.998691270" lastFinishedPulling="2025-05-14 01:21:48.782721339 +0000 UTC m=+42.102535002" observedRunningTime="2025-05-14 01:21:50.081630857 +0000 UTC m=+43.401444510" watchObservedRunningTime="2025-05-14 01:21:50.084233761 +0000 UTC m=+43.404047404" May 14 01:21:50.245163 containerd[1527]: time="2025-05-14T01:21:50.245111726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"528e335aad266b952bf6db9df1ada26c28ccea8e716a084082744dafdc3bec24\" pid:3760 exit_status:1 exited_at:{seconds:1747185710 nanos:244762288}" May 14 01:21:51.113670 containerd[1527]: time="2025-05-14T01:21:51.113582242Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"21419eca144c92659ac8d07abe06ce09f297d17a9ffce455333cdd067b69a4bb\" pid:3877 exit_status:1 exited_at:{seconds:1747185711 nanos:113151240}" May 14 01:21:51.815341 containerd[1527]: time="2025-05-14T01:21:51.815248456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bc6945bb-sc7fw,Uid:c4168a6e-c5d1-417e-a08b-b1b4c467bfb1,Namespace:calico-apiserver,Attempt:0,}" May 14 01:21:52.080338 containerd[1527]: time="2025-05-14T01:21:52.079584537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"2c0c95684bd1269be02e3e1b30416ac41f316368363aa830c327ff68d86131bd\" pid:3939 exit_status:1 exited_at:{seconds:1747185712 nanos:79261981}" May 14 01:21:52.175765 systemd-networkd[1401]: cali5ed7d74262b: Link UP May 14 01:21:52.177908 systemd-networkd[1401]: cali5ed7d74262b: Gained carrier May 14 01:21:52.194764 containerd[1527]: 2025-05-14 01:21:51.870 [INFO][3895] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:21:52.194764 containerd[1527]: 2025-05-14 01:21:51.912 [INFO][3895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0 calico-apiserver-76bc6945bb- calico-apiserver c4168a6e-c5d1-417e-a08b-b1b4c467bfb1 679 0 2025-05-14 01:21:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bc6945bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284-0-0-n-c0828c9b46 calico-apiserver-76bc6945bb-sc7fw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5ed7d74262b [] []}} ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-sc7fw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-" May 14 01:21:52.194764 containerd[1527]: 2025-05-14 01:21:51.913 [INFO][3895] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-sc7fw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0" May 14 01:21:52.194764 containerd[1527]: 2025-05-14 01:21:52.096 [INFO][3919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" HandleID="k8s-pod-network.14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Workload="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0" May 14 01:21:52.196655 containerd[1527]: 2025-05-14 01:21:52.112 [INFO][3919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" HandleID="k8s-pod-network.14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Workload="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002af9d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284-0-0-n-c0828c9b46", "pod":"calico-apiserver-76bc6945bb-sc7fw", "timestamp":"2025-05-14 01:21:52.096065369 +0000 UTC"}, Hostname:"ci-4284-0-0-n-c0828c9b46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:21:52.196655 containerd[1527]: 2025-05-14 01:21:52.112 [INFO][3919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:21:52.196655 containerd[1527]: 2025-05-14 01:21:52.113 [INFO][3919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 01:21:52.196655 containerd[1527]: 2025-05-14 01:21:52.113 [INFO][3919] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-c0828c9b46' May 14 01:21:52.196655 containerd[1527]: 2025-05-14 01:21:52.116 [INFO][3919] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:52.196655 containerd[1527]: 2025-05-14 01:21:52.125 [INFO][3919] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:52.196655 containerd[1527]: 2025-05-14 01:21:52.131 [INFO][3919] ipam/ipam.go 489: Trying affinity for 192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:52.196655 containerd[1527]: 2025-05-14 01:21:52.134 [INFO][3919] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:52.196655 containerd[1527]: 2025-05-14 01:21:52.137 [INFO][3919] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:52.197058 containerd[1527]: 2025-05-14 01:21:52.137 [INFO][3919] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:52.197058 containerd[1527]: 2025-05-14 01:21:52.139 [INFO][3919] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def May 14 01:21:52.197058 containerd[1527]: 2025-05-14 01:21:52.143 [INFO][3919] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:52.197058 containerd[1527]: 2025-05-14 01:21:52.150 [INFO][3919] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.193/26] block=192.168.122.192/26 handle="k8s-pod-network.14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:52.197058 containerd[1527]: 2025-05-14 01:21:52.150 [INFO][3919] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.193/26] handle="k8s-pod-network.14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:52.197058 containerd[1527]: 2025-05-14 01:21:52.151 [INFO][3919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
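In the IPAM trace above, the plugin confirms this host's affinity for block 192.168.122.192/26 and claims 192.168.122.193 from it. A small standard-library sketch of the containment check implied by that assignment, using only values printed in the log (this is not Calico's actual IPAM code):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block with affinity to host ci-4284-0-0-n-c0828c9b46, from the trace above.
        block := netip.MustParsePrefix("192.168.122.192/26")
        // Address the IPAM plugin claimed for calico-apiserver-76bc6945bb-sc7fw.
        addr := netip.MustParseAddr("192.168.122.193")
        // Prints true: the claimed address lies inside the affine block.
        fmt.Println(block.Contains(addr))
    }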
May 14 01:21:52.197058 containerd[1527]: 2025-05-14 01:21:52.151 [INFO][3919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.193/26] IPv6=[] ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" HandleID="k8s-pod-network.14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Workload="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0" May 14 01:21:52.197877 containerd[1527]: 2025-05-14 01:21:52.153 [INFO][3895] cni-plugin/k8s.go 386: Populated endpoint ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-sc7fw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0", GenerateName:"calico-apiserver-76bc6945bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4168a6e-c5d1-417e-a08b-b1b4c467bfb1", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bc6945bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"", Pod:"calico-apiserver-76bc6945bb-sc7fw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ed7d74262b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:52.197969 containerd[1527]: 2025-05-14 01:21:52.153 [INFO][3895] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.193/32] ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-sc7fw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0" May 14 01:21:52.197969 containerd[1527]: 2025-05-14 01:21:52.153 [INFO][3895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ed7d74262b ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-sc7fw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0" May 14 01:21:52.197969 containerd[1527]: 2025-05-14 01:21:52.169 [INFO][3895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-sc7fw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0" May 14 01:21:52.198818 containerd[1527]: 2025-05-14 01:21:52.169 [INFO][3895] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-sc7fw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0", GenerateName:"calico-apiserver-76bc6945bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"c4168a6e-c5d1-417e-a08b-b1b4c467bfb1", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bc6945bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def", Pod:"calico-apiserver-76bc6945bb-sc7fw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5ed7d74262b", MAC:"de:57:17:6b:03:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:52.198989 containerd[1527]: 2025-05-14 01:21:52.189 [INFO][3895] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-sc7fw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--sc7fw-eth0" May 14 01:21:52.265784 containerd[1527]: time="2025-05-14T01:21:52.265711642Z" level=info msg="connecting to shim 14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def" address="unix:///run/containerd/s/40dd6b00563b748f3452b16b712b47b6c681d9c0e50a30256e2cb2b587c32bfd" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:52.306870 systemd[1]: Started cri-containerd-14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def.scope - libcontainer container 14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def. 
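containerd reports "connecting to shim" over a per-sandbox unix socket under /run/containerd/s/ with protocol=ttrpc version=3. The sketch below is only a rough connectivity probe against a hypothetical socket path in that form; the real client is containerd's ttrpc client, not a raw dial:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Hypothetical shim socket path, shaped like the log's
        // unix:///run/containerd/s/<id> addresses.
        const sock = "/run/containerd/s/example-shim-socket"

        // A plain unix-socket dial only shows that something is listening;
        // containerd itself speaks ttrpc over this connection.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("shim socket not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to", conn.RemoteAddr())
    }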
May 14 01:21:52.356330 containerd[1527]: time="2025-05-14T01:21:52.355688488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bc6945bb-sc7fw,Uid:c4168a6e-c5d1-417e-a08b-b1b4c467bfb1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def\"" May 14 01:21:52.357236 containerd[1527]: time="2025-05-14T01:21:52.357201433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 01:21:52.812806 containerd[1527]: time="2025-05-14T01:21:52.812410957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vqwz,Uid:1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4,Namespace:kube-system,Attempt:0,}" May 14 01:21:52.813017 containerd[1527]: time="2025-05-14T01:21:52.812993929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7wggf,Uid:69f83135-f92e-4fa8-9b4c-55dff88f63e2,Namespace:kube-system,Attempt:0,}" May 14 01:21:52.813446 containerd[1527]: time="2025-05-14T01:21:52.813225571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bc6945bb-vnsck,Uid:cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5,Namespace:calico-apiserver,Attempt:0,}" May 14 01:21:53.021955 systemd-networkd[1401]: cali10b5fad0e35: Link UP May 14 01:21:53.022115 systemd-networkd[1401]: cali10b5fad0e35: Gained carrier May 14 01:21:53.055306 containerd[1527]: 2025-05-14 01:21:52.897 [INFO][4011] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:21:53.055306 containerd[1527]: 2025-05-14 01:21:52.917 [INFO][4011] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0 coredns-668d6bf9bc- kube-system 69f83135-f92e-4fa8-9b4c-55dff88f63e2 683 0 2025-05-14 01:21:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284-0-0-n-c0828c9b46 coredns-668d6bf9bc-7wggf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali10b5fad0e35 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7wggf" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-" May 14 01:21:53.055306 containerd[1527]: 2025-05-14 01:21:52.917 [INFO][4011] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7wggf" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0" May 14 01:21:53.055306 containerd[1527]: 2025-05-14 01:21:52.954 [INFO][4053] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" HandleID="k8s-pod-network.bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" Workload="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0" May 14 01:21:53.055815 containerd[1527]: 2025-05-14 01:21:52.967 [INFO][4053] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" HandleID="k8s-pod-network.bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" 
Workload="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bb3b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284-0-0-n-c0828c9b46", "pod":"coredns-668d6bf9bc-7wggf", "timestamp":"2025-05-14 01:21:52.954790917 +0000 UTC"}, Hostname:"ci-4284-0-0-n-c0828c9b46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:21:53.055815 containerd[1527]: 2025-05-14 01:21:52.967 [INFO][4053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:21:53.055815 containerd[1527]: 2025-05-14 01:21:52.968 [INFO][4053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 01:21:53.055815 containerd[1527]: 2025-05-14 01:21:52.968 [INFO][4053] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-c0828c9b46' May 14 01:21:53.055815 containerd[1527]: 2025-05-14 01:21:52.974 [INFO][4053] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.055815 containerd[1527]: 2025-05-14 01:21:52.983 [INFO][4053] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.055815 containerd[1527]: 2025-05-14 01:21:52.989 [INFO][4053] ipam/ipam.go 489: Trying affinity for 192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.055815 containerd[1527]: 2025-05-14 01:21:52.991 [INFO][4053] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.055815 containerd[1527]: 2025-05-14 01:21:52.994 [INFO][4053] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.057786 containerd[1527]: 2025-05-14 01:21:52.994 [INFO][4053] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.057786 containerd[1527]: 2025-05-14 01:21:52.996 [INFO][4053] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e May 14 01:21:53.057786 containerd[1527]: 2025-05-14 01:21:53.001 [INFO][4053] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.057786 containerd[1527]: 2025-05-14 01:21:53.008 [INFO][4053] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.194/26] block=192.168.122.192/26 handle="k8s-pod-network.bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.057786 containerd[1527]: 2025-05-14 01:21:53.009 [INFO][4053] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.194/26] handle="k8s-pod-network.bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.057786 containerd[1527]: 2025-05-14 01:21:53.009 [INFO][4053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 01:21:53.057786 containerd[1527]: 2025-05-14 01:21:53.009 [INFO][4053] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.194/26] IPv6=[] ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" HandleID="k8s-pod-network.bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" Workload="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0" May 14 01:21:53.057951 containerd[1527]: 2025-05-14 01:21:53.015 [INFO][4011] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7wggf" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"69f83135-f92e-4fa8-9b4c-55dff88f63e2", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"", Pod:"coredns-668d6bf9bc-7wggf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10b5fad0e35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:53.057951 containerd[1527]: 2025-05-14 01:21:53.015 [INFO][4011] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.194/32] ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7wggf" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0" May 14 01:21:53.057951 containerd[1527]: 2025-05-14 01:21:53.015 [INFO][4011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10b5fad0e35 ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7wggf" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0" May 14 01:21:53.057951 containerd[1527]: 2025-05-14 01:21:53.023 [INFO][4011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7wggf" 
WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0" May 14 01:21:53.057951 containerd[1527]: 2025-05-14 01:21:53.025 [INFO][4011] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7wggf" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"69f83135-f92e-4fa8-9b4c-55dff88f63e2", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e", Pod:"coredns-668d6bf9bc-7wggf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10b5fad0e35", MAC:"aa:51:34:22:9e:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:53.057951 containerd[1527]: 2025-05-14 01:21:53.043 [INFO][4011] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-7wggf" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7wggf-eth0" May 14 01:21:53.099193 containerd[1527]: time="2025-05-14T01:21:53.097752669Z" level=info msg="connecting to shim bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e" address="unix:///run/containerd/s/d2d6079f832a3778f316bff039fd2814e891500762beeb404ca973656b1a5814" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:53.127008 systemd[1]: Started cri-containerd-bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e.scope - libcontainer container bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e. 
May 14 01:21:53.163165 systemd-networkd[1401]: califb33671f221: Link UP May 14 01:21:53.163290 systemd-networkd[1401]: califb33671f221: Gained carrier May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:52.888 [INFO][4010] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:52.905 [INFO][4010] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0 coredns-668d6bf9bc- kube-system 1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4 686 0 2025-05-14 01:21:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284-0-0-n-c0828c9b46 coredns-668d6bf9bc-7vqwz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califb33671f221 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vqwz" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:52.905 [INFO][4010] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vqwz" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:52.969 [INFO][4048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" HandleID="k8s-pod-network.3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Workload="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:52.983 [INFO][4048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" HandleID="k8s-pod-network.3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Workload="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba280), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284-0-0-n-c0828c9b46", "pod":"coredns-668d6bf9bc-7vqwz", "timestamp":"2025-05-14 01:21:52.969218172 +0000 UTC"}, Hostname:"ci-4284-0-0-n-c0828c9b46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:52.984 [INFO][4048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.009 [INFO][4048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.009 [INFO][4048] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-c0828c9b46' May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.075 [INFO][4048] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.098 [INFO][4048] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.119 [INFO][4048] ipam/ipam.go 489: Trying affinity for 192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.129 [INFO][4048] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.134 [INFO][4048] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.134 [INFO][4048] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.141 [INFO][4048] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.148 [INFO][4048] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.155 [INFO][4048] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.195/26] block=192.168.122.192/26 handle="k8s-pod-network.3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.155 [INFO][4048] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.195/26] handle="k8s-pod-network.3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.155 [INFO][4048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 01:21:53.180821 containerd[1527]: 2025-05-14 01:21:53.156 [INFO][4048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.195/26] IPv6=[] ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" HandleID="k8s-pod-network.3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Workload="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0" May 14 01:21:53.181735 containerd[1527]: 2025-05-14 01:21:53.159 [INFO][4010] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vqwz" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"", Pod:"coredns-668d6bf9bc-7vqwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb33671f221", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:53.181735 containerd[1527]: 2025-05-14 01:21:53.160 [INFO][4010] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.195/32] ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vqwz" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0" May 14 01:21:53.181735 containerd[1527]: 2025-05-14 01:21:53.160 [INFO][4010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb33671f221 ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vqwz" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0" May 14 01:21:53.181735 containerd[1527]: 2025-05-14 01:21:53.162 [INFO][4010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vqwz" 
WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0" May 14 01:21:53.181735 containerd[1527]: 2025-05-14 01:21:53.162 [INFO][4010] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vqwz" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a", Pod:"coredns-668d6bf9bc-7vqwz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb33671f221", MAC:"0a:65:60:4e:c9:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:53.181735 containerd[1527]: 2025-05-14 01:21:53.174 [INFO][4010] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-7vqwz" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-coredns--668d6bf9bc--7vqwz-eth0" May 14 01:21:53.192611 containerd[1527]: time="2025-05-14T01:21:53.192577734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7wggf,Uid:69f83135-f92e-4fa8-9b4c-55dff88f63e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e\"" May 14 01:21:53.198077 containerd[1527]: time="2025-05-14T01:21:53.197940203Z" level=info msg="CreateContainer within sandbox \"bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 01:21:53.201780 kubelet[2830]: I0514 01:21:53.201685 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:21:53.233741 containerd[1527]: time="2025-05-14T01:21:53.233700670Z" level=info msg="connecting to shim 
3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a" address="unix:///run/containerd/s/06a6b90d4e3993ab7ff84ef4fc3c19ede8632f9ea1687d62cf21fefacabd8cdb" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:53.247383 containerd[1527]: time="2025-05-14T01:21:53.247295049Z" level=info msg="Container 04b73af5dec4d5c2c19b79ed731c6901f7017ab1ed904b300282d3d762d631c3: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:53.251684 systemd-networkd[1401]: cali4fda7befab3: Link UP May 14 01:21:53.252183 systemd-networkd[1401]: cali4fda7befab3: Gained carrier May 14 01:21:53.266417 containerd[1527]: time="2025-05-14T01:21:53.266273237Z" level=info msg="CreateContainer within sandbox \"bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04b73af5dec4d5c2c19b79ed731c6901f7017ab1ed904b300282d3d762d631c3\"" May 14 01:21:53.267604 containerd[1527]: time="2025-05-14T01:21:53.266947323Z" level=info msg="StartContainer for \"04b73af5dec4d5c2c19b79ed731c6901f7017ab1ed904b300282d3d762d631c3\"" May 14 01:21:53.267682 containerd[1527]: time="2025-05-14T01:21:53.267612422Z" level=info msg="connecting to shim 04b73af5dec4d5c2c19b79ed731c6901f7017ab1ed904b300282d3d762d631c3" address="unix:///run/containerd/s/d2d6079f832a3778f316bff039fd2814e891500762beeb404ca973656b1a5814" protocol=ttrpc version=3 May 14 01:21:53.271419 systemd[1]: Started cri-containerd-3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a.scope - libcontainer container 3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a. May 14 01:21:53.284298 systemd[1]: Started cri-containerd-04b73af5dec4d5c2c19b79ed731c6901f7017ab1ed904b300282d3d762d631c3.scope - libcontainer container 04b73af5dec4d5c2c19b79ed731c6901f7017ab1ed904b300282d3d762d631c3. 
May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:52.906 [INFO][4030] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:52.918 [INFO][4030] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0 calico-apiserver-76bc6945bb- calico-apiserver cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5 684 0 2025-05-14 01:21:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76bc6945bb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284-0-0-n-c0828c9b46 calico-apiserver-76bc6945bb-vnsck eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4fda7befab3 [] []}} ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-vnsck" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:52.918 [INFO][4030] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-vnsck" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:52.976 [INFO][4058] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" HandleID="k8s-pod-network.a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Workload="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:52.984 [INFO][4058] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" HandleID="k8s-pod-network.a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Workload="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050e20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284-0-0-n-c0828c9b46", "pod":"calico-apiserver-76bc6945bb-vnsck", "timestamp":"2025-05-14 01:21:52.975991261 +0000 UTC"}, Hostname:"ci-4284-0-0-n-c0828c9b46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:52.984 [INFO][4058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.155 [INFO][4058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.156 [INFO][4058] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-c0828c9b46' May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.175 [INFO][4058] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.194 [INFO][4058] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.208 [INFO][4058] ipam/ipam.go 489: Trying affinity for 192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.210 [INFO][4058] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.215 [INFO][4058] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.215 [INFO][4058] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.217 [INFO][4058] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390 May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.235 [INFO][4058] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.243 [INFO][4058] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.196/26] block=192.168.122.192/26 handle="k8s-pod-network.a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.243 [INFO][4058] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.196/26] handle="k8s-pod-network.a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.243 [INFO][4058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 14 01:21:53.319510 containerd[1527]: 2025-05-14 01:21:53.243 [INFO][4058] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.196/26] IPv6=[] ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" HandleID="k8s-pod-network.a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Workload="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0" May 14 01:21:53.320691 containerd[1527]: 2025-05-14 01:21:53.246 [INFO][4030] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-vnsck" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0", GenerateName:"calico-apiserver-76bc6945bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bc6945bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"", Pod:"calico-apiserver-76bc6945bb-vnsck", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fda7befab3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:53.320691 containerd[1527]: 2025-05-14 01:21:53.247 [INFO][4030] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.196/32] ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-vnsck" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0" May 14 01:21:53.320691 containerd[1527]: 2025-05-14 01:21:53.247 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fda7befab3 ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-vnsck" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0" May 14 01:21:53.320691 containerd[1527]: 2025-05-14 01:21:53.253 [INFO][4030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-vnsck" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0" May 14 01:21:53.320691 containerd[1527]: 2025-05-14 01:21:53.253 [INFO][4030] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-vnsck" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0", GenerateName:"calico-apiserver-76bc6945bb-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76bc6945bb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390", Pod:"calico-apiserver-76bc6945bb-vnsck", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fda7befab3", MAC:"de:34:d1:b3:e3:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:53.320691 containerd[1527]: 2025-05-14 01:21:53.316 [INFO][4030] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" Namespace="calico-apiserver" Pod="calico-apiserver-76bc6945bb-vnsck" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--apiserver--76bc6945bb--vnsck-eth0" May 14 01:21:53.371273 containerd[1527]: time="2025-05-14T01:21:53.371097655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7vqwz,Uid:1a53d4b9-a854-4ca0-a736-f4ef5a7d06c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a\"" May 14 01:21:53.379594 containerd[1527]: time="2025-05-14T01:21:53.378789325Z" level=info msg="CreateContainer within sandbox \"3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 01:21:53.421377 containerd[1527]: time="2025-05-14T01:21:53.421335055Z" level=info msg="connecting to shim a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390" address="unix:///run/containerd/s/f72eaa9b52d9d4452ea56e944bcb4dcb7ac83e1cbfa129ba65b68ea155f83dc0" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:53.431576 containerd[1527]: time="2025-05-14T01:21:53.431391055Z" level=info msg="Container 599d59d2de206ff95e2e13e1301aa2e5dd3b4c475b04a80234d12d7b6aa4926b: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:53.435184 containerd[1527]: time="2025-05-14T01:21:53.435088548Z" level=info msg="StartContainer for 
\"04b73af5dec4d5c2c19b79ed731c6901f7017ab1ed904b300282d3d762d631c3\" returns successfully" May 14 01:21:53.446233 containerd[1527]: time="2025-05-14T01:21:53.446147500Z" level=info msg="CreateContainer within sandbox \"3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"599d59d2de206ff95e2e13e1301aa2e5dd3b4c475b04a80234d12d7b6aa4926b\"" May 14 01:21:53.448873 containerd[1527]: time="2025-05-14T01:21:53.448105575Z" level=info msg="StartContainer for \"599d59d2de206ff95e2e13e1301aa2e5dd3b4c475b04a80234d12d7b6aa4926b\"" May 14 01:21:53.450605 containerd[1527]: time="2025-05-14T01:21:53.450437013Z" level=info msg="connecting to shim 599d59d2de206ff95e2e13e1301aa2e5dd3b4c475b04a80234d12d7b6aa4926b" address="unix:///run/containerd/s/06a6b90d4e3993ab7ff84ef4fc3c19ede8632f9ea1687d62cf21fefacabd8cdb" protocol=ttrpc version=3 May 14 01:21:53.472711 systemd[1]: Started cri-containerd-a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390.scope - libcontainer container a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390. May 14 01:21:53.479713 systemd[1]: Started cri-containerd-599d59d2de206ff95e2e13e1301aa2e5dd3b4c475b04a80234d12d7b6aa4926b.scope - libcontainer container 599d59d2de206ff95e2e13e1301aa2e5dd3b4c475b04a80234d12d7b6aa4926b. May 14 01:21:53.510680 containerd[1527]: time="2025-05-14T01:21:53.510618620Z" level=info msg="StartContainer for \"599d59d2de206ff95e2e13e1301aa2e5dd3b4c475b04a80234d12d7b6aa4926b\" returns successfully" May 14 01:21:53.530478 containerd[1527]: time="2025-05-14T01:21:53.530374321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76bc6945bb-vnsck,Uid:cd2bdce0-0bb7-4bf3-bc56-675335a5f3f5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390\"" May 14 01:21:53.712046 systemd-networkd[1401]: cali5ed7d74262b: Gained IPv6LL May 14 01:21:53.815375 containerd[1527]: time="2025-05-14T01:21:53.815301548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75876f4c7b-2qdpw,Uid:b38cbe2b-a17c-45d3-bd71-db37dd71c076,Namespace:calico-system,Attempt:0,}" May 14 01:21:53.995014 systemd-networkd[1401]: cali84dc736c448: Link UP May 14 01:21:53.995286 systemd-networkd[1401]: cali84dc736c448: Gained carrier May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.876 [INFO][4328] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.894 [INFO][4328] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0 calico-kube-controllers-75876f4c7b- calico-system b38cbe2b-a17c-45d3-bd71-db37dd71c076 685 0 2025-05-14 01:21:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75876f4c7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284-0-0-n-c0828c9b46 calico-kube-controllers-75876f4c7b-2qdpw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali84dc736c448 [] []}} ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Namespace="calico-system" Pod="calico-kube-controllers-75876f4c7b-2qdpw" 
WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.895 [INFO][4328] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Namespace="calico-system" Pod="calico-kube-controllers-75876f4c7b-2qdpw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.946 [INFO][4336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" HandleID="k8s-pod-network.dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Workload="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.956 [INFO][4336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" HandleID="k8s-pod-network.dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Workload="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d8110), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284-0-0-n-c0828c9b46", "pod":"calico-kube-controllers-75876f4c7b-2qdpw", "timestamp":"2025-05-14 01:21:53.946270211 +0000 UTC"}, Hostname:"ci-4284-0-0-n-c0828c9b46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.956 [INFO][4336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.956 [INFO][4336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.956 [INFO][4336] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-c0828c9b46' May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.960 [INFO][4336] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.965 [INFO][4336] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.971 [INFO][4336] ipam/ipam.go 489: Trying affinity for 192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.974 [INFO][4336] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.977 [INFO][4336] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.977 [INFO][4336] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.978 [INFO][4336] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75 May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.982 [INFO][4336] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.989 [INFO][4336] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.197/26] block=192.168.122.192/26 handle="k8s-pod-network.dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.989 [INFO][4336] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.197/26] handle="k8s-pod-network.dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.989 [INFO][4336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
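
The IPAM steps above (acquire the host-wide lock, confirm this host's affinity for block 192.168.122.192/26, claim one address, release the lock) end with 192.168.122.197 being handed out. The Go sketch below imitates only the claim step, with hypothetical types and a hand-filled allocation map; it is not Calico's ipam package.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFreeAddr walks an affine block in address order and returns the first
    // address not yet allocated, mirroring "Attempting to assign 1 addresses
    // from block 192.168.122.192/26" in the entries above.
    func nextFreeAddr(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !allocated[a] {
                return a, true
            }
        }
        return netip.Addr{}, false // block exhausted
    }

    func main() {
        block := netip.MustParsePrefix("192.168.122.192/26")
        // .196 went to the calico-apiserver pod earlier in the log; the lower
        // addresses are assumed taken here purely for illustration.
        allocated := map[netip.Addr]bool{}
        for _, s := range []string{
            "192.168.122.192", "192.168.122.193", "192.168.122.194",
            "192.168.122.195", "192.168.122.196",
        } {
            allocated[netip.MustParseAddr(s)] = true
        }
        if a, ok := nextFreeAddr(block, allocated); ok {
            fmt.Println("claimed", a) // claimed 192.168.122.197
        }
    }
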
May 14 01:21:54.014234 containerd[1527]: 2025-05-14 01:21:53.989 [INFO][4336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.197/26] IPv6=[] ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" HandleID="k8s-pod-network.dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Workload="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0" May 14 01:21:54.015991 containerd[1527]: 2025-05-14 01:21:53.992 [INFO][4328] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Namespace="calico-system" Pod="calico-kube-controllers-75876f4c7b-2qdpw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0", GenerateName:"calico-kube-controllers-75876f4c7b-", Namespace:"calico-system", SelfLink:"", UID:"b38cbe2b-a17c-45d3-bd71-db37dd71c076", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75876f4c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"", Pod:"calico-kube-controllers-75876f4c7b-2qdpw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84dc736c448", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:54.015991 containerd[1527]: 2025-05-14 01:21:53.992 [INFO][4328] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.197/32] ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Namespace="calico-system" Pod="calico-kube-controllers-75876f4c7b-2qdpw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0" May 14 01:21:54.015991 containerd[1527]: 2025-05-14 01:21:53.992 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84dc736c448 ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Namespace="calico-system" Pod="calico-kube-controllers-75876f4c7b-2qdpw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0" May 14 01:21:54.015991 containerd[1527]: 2025-05-14 01:21:53.996 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Namespace="calico-system" Pod="calico-kube-controllers-75876f4c7b-2qdpw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0" May 14 
01:21:54.015991 containerd[1527]: 2025-05-14 01:21:53.996 [INFO][4328] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Namespace="calico-system" Pod="calico-kube-controllers-75876f4c7b-2qdpw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0", GenerateName:"calico-kube-controllers-75876f4c7b-", Namespace:"calico-system", SelfLink:"", UID:"b38cbe2b-a17c-45d3-bd71-db37dd71c076", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75876f4c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75", Pod:"calico-kube-controllers-75876f4c7b-2qdpw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali84dc736c448", MAC:"76:85:99:15:5a:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:54.015991 containerd[1527]: 2025-05-14 01:21:54.009 [INFO][4328] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" Namespace="calico-system" Pod="calico-kube-controllers-75876f4c7b-2qdpw" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-calico--kube--controllers--75876f4c7b--2qdpw-eth0" May 14 01:21:54.047817 containerd[1527]: time="2025-05-14T01:21:54.047733695Z" level=info msg="connecting to shim dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75" address="unix:///run/containerd/s/53e42349019c2cfd3945c57b1cd63a6f38545fa2805cf864f4a791590ca2dbaa" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:54.092744 kubelet[2830]: I0514 01:21:54.092452 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7vqwz" podStartSLOduration=43.092412448 podStartE2EDuration="43.092412448s" podCreationTimestamp="2025-05-14 01:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:21:54.09047932 +0000 UTC m=+47.410292934" watchObservedRunningTime="2025-05-14 01:21:54.092412448 +0000 UTC m=+47.412226062" May 14 01:21:54.102842 systemd[1]: Started cri-containerd-dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75.scope - libcontainer container dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75. 
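
Every containerd record above follows the same shape: a journal prefix (month, day, time, unit name with PID) followed by logfmt-style key=value fields such as level=info and msg="...". A minimal parsing sketch, assuming that shape; quoted values are matched but the escaped \" sequences inside some msg values are left as-is.

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // journalPrefix captures the "May 14 01:21:53.371273 containerd[1527]: "
    // part of each record; the remainder is the message body.
    var journalPrefix = regexp.MustCompile(`^(\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) (\S+?)\[(\d+)\]: (.*)$`)

    // kvField picks out logfmt-style fields such as level=info or msg="...".
    var kvField = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

    func parseRecord(line string) (ts, unit, pid string, fields map[string]string, ok bool) {
        m := journalPrefix.FindStringSubmatch(line)
        if m == nil {
            return "", "", "", nil, false
        }
        fields = map[string]string{}
        for _, kv := range kvField.FindAllStringSubmatch(m[4], -1) {
            fields[kv[1]] = strings.Trim(kv[2], `"`)
        }
        return m[1], m[2], m[3], fields, true
    }

    func main() {
        line := `May 14 01:21:53.371273 containerd[1527]: time="2025-05-14T01:21:53.371097655Z" level=info msg="RunPodSandbox returns sandbox id"`
        if ts, unit, pid, fields, ok := parseRecord(line); ok {
            fmt.Println(ts, unit, pid, fields["level"], fields["msg"])
        }
    }
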
May 14 01:21:54.119661 kubelet[2830]: I0514 01:21:54.118337 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7wggf" podStartSLOduration=43.118320247 podStartE2EDuration="43.118320247s" podCreationTimestamp="2025-05-14 01:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 01:21:54.117838278 +0000 UTC m=+47.437651890" watchObservedRunningTime="2025-05-14 01:21:54.118320247 +0000 UTC m=+47.438133860" May 14 01:21:54.198078 containerd[1527]: time="2025-05-14T01:21:54.198035762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75876f4c7b-2qdpw,Uid:b38cbe2b-a17c-45d3-bd71-db37dd71c076,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75\"" May 14 01:21:54.291898 kernel: bpftool[4437]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 14 01:21:54.543844 systemd-networkd[1401]: cali10b5fad0e35: Gained IPv6LL May 14 01:21:54.551618 systemd-networkd[1401]: vxlan.calico: Link UP May 14 01:21:54.551705 systemd-networkd[1401]: vxlan.calico: Gained carrier May 14 01:21:54.671840 systemd-networkd[1401]: califb33671f221: Gained IPv6LL May 14 01:21:54.864391 systemd-networkd[1401]: cali4fda7befab3: Gained IPv6LL May 14 01:21:55.504939 systemd-networkd[1401]: cali84dc736c448: Gained IPv6LL May 14 01:21:55.811516 containerd[1527]: time="2025-05-14T01:21:55.811423082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2jzfd,Uid:1ff2619d-9812-4662-94f4-719dc464e433,Namespace:calico-system,Attempt:0,}" May 14 01:21:55.960708 systemd-networkd[1401]: calibbfad617531: Link UP May 14 01:21:55.961408 systemd-networkd[1401]: calibbfad617531: Gained carrier May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.874 [INFO][4535] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0 csi-node-driver- calico-system 1ff2619d-9812-4662-94f4-719dc464e433 595 0 2025-05-14 01:21:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5b5cc68cd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284-0-0-n-c0828c9b46 csi-node-driver-2jzfd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibbfad617531 [] []}} ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" Namespace="calico-system" Pod="csi-node-driver-2jzfd" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.874 [INFO][4535] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" Namespace="calico-system" Pod="csi-node-driver-2jzfd" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.909 [INFO][4546] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" HandleID="k8s-pod-network.afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" 
Workload="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.919 [INFO][4546] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" HandleID="k8s-pod-network.afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" Workload="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284-0-0-n-c0828c9b46", "pod":"csi-node-driver-2jzfd", "timestamp":"2025-05-14 01:21:55.90992289 +0000 UTC"}, Hostname:"ci-4284-0-0-n-c0828c9b46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.919 [INFO][4546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.919 [INFO][4546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.919 [INFO][4546] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-n-c0828c9b46' May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.923 [INFO][4546] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.928 [INFO][4546] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.935 [INFO][4546] ipam/ipam.go 489: Trying affinity for 192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.937 [INFO][4546] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.941 [INFO][4546] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.192/26 host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.941 [INFO][4546] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.192/26 handle="k8s-pod-network.afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.944 [INFO][4546] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.948 [INFO][4546] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.192/26 handle="k8s-pod-network.afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.955 [INFO][4546] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.198/26] block=192.168.122.192/26 handle="k8s-pod-network.afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.955 [INFO][4546] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.198/26] 
handle="k8s-pod-network.afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" host="ci-4284-0-0-n-c0828c9b46" May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.956 [INFO][4546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 14 01:21:55.986038 containerd[1527]: 2025-05-14 01:21:55.956 [INFO][4546] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.198/26] IPv6=[] ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" HandleID="k8s-pod-network.afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" Workload="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0" May 14 01:21:55.986660 containerd[1527]: 2025-05-14 01:21:55.957 [INFO][4535] cni-plugin/k8s.go 386: Populated endpoint ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" Namespace="calico-system" Pod="csi-node-driver-2jzfd" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ff2619d-9812-4662-94f4-719dc464e433", ResourceVersion:"595", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"", Pod:"csi-node-driver-2jzfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbfad617531", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:55.986660 containerd[1527]: 2025-05-14 01:21:55.958 [INFO][4535] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.198/32] ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" Namespace="calico-system" Pod="csi-node-driver-2jzfd" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0" May 14 01:21:55.986660 containerd[1527]: 2025-05-14 01:21:55.958 [INFO][4535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbfad617531 ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" Namespace="calico-system" Pod="csi-node-driver-2jzfd" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0" May 14 01:21:55.986660 containerd[1527]: 2025-05-14 01:21:55.960 [INFO][4535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" Namespace="calico-system" Pod="csi-node-driver-2jzfd" 
WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0" May 14 01:21:55.986660 containerd[1527]: 2025-05-14 01:21:55.961 [INFO][4535] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" Namespace="calico-system" Pod="csi-node-driver-2jzfd" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ff2619d-9812-4662-94f4-719dc464e433", ResourceVersion:"595", Generation:0, CreationTimestamp:time.Date(2025, time.May, 14, 1, 21, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5b5cc68cd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-n-c0828c9b46", ContainerID:"afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a", Pod:"csi-node-driver-2jzfd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibbfad617531", MAC:"06:c5:51:e4:c4:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 14 01:21:55.986660 containerd[1527]: 2025-05-14 01:21:55.983 [INFO][4535] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" Namespace="calico-system" Pod="csi-node-driver-2jzfd" WorkloadEndpoint="ci--4284--0--0--n--c0828c9b46-k8s-csi--node--driver--2jzfd-eth0" May 14 01:21:56.016500 containerd[1527]: time="2025-05-14T01:21:56.016421921Z" level=info msg="connecting to shim afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a" address="unix:///run/containerd/s/3e9a3323532342dc524f38b0a8cb34730807a256a7f6a524fa0ce43c68ccf23a" namespace=k8s.io protocol=ttrpc version=3 May 14 01:21:56.051849 systemd[1]: Started cri-containerd-afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a.scope - libcontainer container afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a. 
May 14 01:21:56.087565 containerd[1527]: time="2025-05-14T01:21:56.087503085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2jzfd,Uid:1ff2619d-9812-4662-94f4-719dc464e433,Namespace:calico-system,Attempt:0,} returns sandbox id \"afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a\"" May 14 01:21:56.337921 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL May 14 01:21:56.854548 containerd[1527]: time="2025-05-14T01:21:56.854454811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:56.856076 containerd[1527]: time="2025-05-14T01:21:56.856014125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=43021437" May 14 01:21:56.857868 containerd[1527]: time="2025-05-14T01:21:56.857468410Z" level=info msg="ImageCreate event name:\"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:56.861115 containerd[1527]: time="2025-05-14T01:21:56.861008233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:56.862943 containerd[1527]: time="2025-05-14T01:21:56.862912937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 4.50568354s" May 14 01:21:56.863073 containerd[1527]: time="2025-05-14T01:21:56.863049326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 01:21:56.864555 containerd[1527]: time="2025-05-14T01:21:56.864186757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 14 01:21:56.869240 containerd[1527]: time="2025-05-14T01:21:56.869145216Z" level=info msg="CreateContainer within sandbox \"14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 01:21:56.881662 containerd[1527]: time="2025-05-14T01:21:56.880219493Z" level=info msg="Container 8ba7436810e730202e1163b72a456e41cf59839ef38686e660487af0ed5c4ad4: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:56.889936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3761499389.mount: Deactivated successfully. 
May 14 01:21:56.895662 containerd[1527]: time="2025-05-14T01:21:56.895600576Z" level=info msg="CreateContainer within sandbox \"14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8ba7436810e730202e1163b72a456e41cf59839ef38686e660487af0ed5c4ad4\"" May 14 01:21:56.898189 containerd[1527]: time="2025-05-14T01:21:56.897654063Z" level=info msg="StartContainer for \"8ba7436810e730202e1163b72a456e41cf59839ef38686e660487af0ed5c4ad4\"" May 14 01:21:56.899084 containerd[1527]: time="2025-05-14T01:21:56.899051620Z" level=info msg="connecting to shim 8ba7436810e730202e1163b72a456e41cf59839ef38686e660487af0ed5c4ad4" address="unix:///run/containerd/s/40dd6b00563b748f3452b16b712b47b6c681d9c0e50a30256e2cb2b587c32bfd" protocol=ttrpc version=3 May 14 01:21:56.925841 systemd[1]: Started cri-containerd-8ba7436810e730202e1163b72a456e41cf59839ef38686e660487af0ed5c4ad4.scope - libcontainer container 8ba7436810e730202e1163b72a456e41cf59839ef38686e660487af0ed5c4ad4. May 14 01:21:56.979801 containerd[1527]: time="2025-05-14T01:21:56.979757665Z" level=info msg="StartContainer for \"8ba7436810e730202e1163b72a456e41cf59839ef38686e660487af0ed5c4ad4\" returns successfully" May 14 01:21:57.370161 containerd[1527]: time="2025-05-14T01:21:57.370088464Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:21:57.371664 containerd[1527]: time="2025-05-14T01:21:57.371543109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 14 01:21:57.376223 containerd[1527]: time="2025-05-14T01:21:57.376139508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"44514075\" in 511.916493ms" May 14 01:21:57.376289 containerd[1527]: time="2025-05-14T01:21:57.376229490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 14 01:21:57.379007 containerd[1527]: time="2025-05-14T01:21:57.378804222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 14 01:21:57.380129 containerd[1527]: time="2025-05-14T01:21:57.380076840Z" level=info msg="CreateContainer within sandbox \"a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 14 01:21:57.392619 containerd[1527]: time="2025-05-14T01:21:57.391953339Z" level=info msg="Container 4c12405b92f2720ac713c26279b68a4a8e154e2139692b0b2aba95b324c61f57: CDI devices from CRI Config.CDIDevices: []" May 14 01:21:57.398595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739048373.mount: Deactivated successfully. 
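
The first apiserver image pull above reports bytes read=43021437 and completes "in 4.50568354s"; the repeat pull of the same reference finishes in 511.916493ms with only 77 bytes read, consistent with the layers already being present (an ImageUpdate event rather than ImageCreate). A rough rate from those figures, treating "bytes read" as the transferred payload, which is only an approximation:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 43021437 // "bytes read" reported while pulling calico/apiserver:v3.29.3
        d, err := time.ParseDuration("4.50568354s")
        if err != nil {
            panic(err)
        }
        fmt.Printf("~%.1f MiB/s over the first pull\n", float64(bytesRead)/(1<<20)/d.Seconds())

        // The second pull of the same tag only re-validated the image.
        d2, _ := time.ParseDuration("511.916493ms")
        fmt.Println("revalidation took", d2)
    }
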
May 14 01:21:57.403893 containerd[1527]: time="2025-05-14T01:21:57.403843583Z" level=info msg="CreateContainer within sandbox \"a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4c12405b92f2720ac713c26279b68a4a8e154e2139692b0b2aba95b324c61f57\"" May 14 01:21:57.406122 containerd[1527]: time="2025-05-14T01:21:57.406061315Z" level=info msg="StartContainer for \"4c12405b92f2720ac713c26279b68a4a8e154e2139692b0b2aba95b324c61f57\"" May 14 01:21:57.408207 containerd[1527]: time="2025-05-14T01:21:57.408173665Z" level=info msg="connecting to shim 4c12405b92f2720ac713c26279b68a4a8e154e2139692b0b2aba95b324c61f57" address="unix:///run/containerd/s/f72eaa9b52d9d4452ea56e944bcb4dcb7ac83e1cbfa129ba65b68ea155f83dc0" protocol=ttrpc version=3 May 14 01:21:57.436778 systemd[1]: Started cri-containerd-4c12405b92f2720ac713c26279b68a4a8e154e2139692b0b2aba95b324c61f57.scope - libcontainer container 4c12405b92f2720ac713c26279b68a4a8e154e2139692b0b2aba95b324c61f57. May 14 01:21:57.496090 containerd[1527]: time="2025-05-14T01:21:57.496043346Z" level=info msg="StartContainer for \"4c12405b92f2720ac713c26279b68a4a8e154e2139692b0b2aba95b324c61f57\" returns successfully" May 14 01:21:57.552427 systemd-networkd[1401]: calibbfad617531: Gained IPv6LL May 14 01:21:58.097079 kubelet[2830]: I0514 01:21:58.097017 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:21:58.115335 kubelet[2830]: I0514 01:21:58.115231 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76bc6945bb-vnsck" podStartSLOduration=26.279497029 podStartE2EDuration="30.11520773s" podCreationTimestamp="2025-05-14 01:21:28 +0000 UTC" firstStartedPulling="2025-05-14 01:21:53.542180569 +0000 UTC m=+46.861994183" lastFinishedPulling="2025-05-14 01:21:57.37789127 +0000 UTC m=+50.697704884" observedRunningTime="2025-05-14 01:21:58.114228191 +0000 UTC m=+51.434041814" watchObservedRunningTime="2025-05-14 01:21:58.11520773 +0000 UTC m=+51.435021364" May 14 01:21:58.115697 kubelet[2830]: I0514 01:21:58.115375 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76bc6945bb-sc7fw" podStartSLOduration=25.608087279 podStartE2EDuration="30.115369679s" podCreationTimestamp="2025-05-14 01:21:28 +0000 UTC" firstStartedPulling="2025-05-14 01:21:52.356528078 +0000 UTC m=+45.676341692" lastFinishedPulling="2025-05-14 01:21:56.863810458 +0000 UTC m=+50.183624092" observedRunningTime="2025-05-14 01:21:57.103591488 +0000 UTC m=+50.423405101" watchObservedRunningTime="2025-05-14 01:21:58.115369679 +0000 UTC m=+51.435183302" May 14 01:21:59.141926 sshd[3699]: Connection closed by invalid user 83.222.191.218 port 57058 [preauth] May 14 01:21:59.145304 systemd[1]: sshd@9-37.27.220.42:22-83.222.191.218:57058.service: Deactivated successfully. 
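
The two "Observed pod startup duration" entries above carry both an end-to-end figure and an SLO figure, and the numbers are consistent with the SLO duration being the end-to-end duration minus the image-pull window (lastFinishedPulling - firstStartedPulling). Checking that arithmetic for the calico-apiserver-76bc6945bb-vnsck entry, using values copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        first, _ := time.Parse(layout, "2025-05-14 01:21:53.542180569 +0000 UTC")
        last, _ := time.Parse(layout, "2025-05-14 01:21:57.37789127 +0000 UTC")
        e2e, _ := time.ParseDuration("30.11520773s")

        slo := e2e - last.Sub(first)        // end-to-end minus the image-pull window
        fmt.Printf("%.9f\n", slo.Seconds()) // 26.279497029, matching podStartSLOduration
    }
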
May 14 01:22:00.530715 containerd[1527]: time="2025-05-14T01:22:00.530670234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:22:00.531693 containerd[1527]: time="2025-05-14T01:22:00.531645755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=34789138" May 14 01:22:00.532788 containerd[1527]: time="2025-05-14T01:22:00.532755553Z" level=info msg="ImageCreate event name:\"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:22:00.534592 containerd[1527]: time="2025-05-14T01:22:00.534569104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:22:00.535137 containerd[1527]: time="2025-05-14T01:22:00.535038398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"36281728\" in 3.15594224s" May 14 01:22:00.535137 containerd[1527]: time="2025-05-14T01:22:00.535062424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 14 01:22:00.535893 containerd[1527]: time="2025-05-14T01:22:00.535876328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 14 01:22:00.550154 containerd[1527]: time="2025-05-14T01:22:00.550049382Z" level=info msg="CreateContainer within sandbox \"dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 14 01:22:00.559681 containerd[1527]: time="2025-05-14T01:22:00.559604796Z" level=info msg="Container e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6: CDI devices from CRI Config.CDIDevices: []" May 14 01:22:00.573150 containerd[1527]: time="2025-05-14T01:22:00.573094857Z" level=info msg="CreateContainer within sandbox \"dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\"" May 14 01:22:00.575185 containerd[1527]: time="2025-05-14T01:22:00.573629898Z" level=info msg="StartContainer for \"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\"" May 14 01:22:00.575185 containerd[1527]: time="2025-05-14T01:22:00.574829768Z" level=info msg="connecting to shim e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6" address="unix:///run/containerd/s/53e42349019c2cfd3945c57b1cd63a6f38545fa2805cf864f4a791590ca2dbaa" protocol=ttrpc version=3 May 14 01:22:00.601111 systemd[1]: Started cri-containerd-e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6.scope - libcontainer container e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6. 
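
Each pull record above names the image twice: by repo tag (ghcr.io/flatcar/calico/kube-controllers:v3.29.3) and by repo digest (the @sha256:... form). A simplified split of such references; real tooling parses the full distribution reference grammar, which this sketch does not attempt.

    package main

    import (
        "fmt"
        "strings"
    )

    func splitRef(ref string) (name, tag, digest string) {
        if i := strings.LastIndex(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // A ':' after the last '/' is a tag separator, not a registry port.
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        n, t, _ := splitRef("ghcr.io/flatcar/calico/kube-controllers:v3.29.3")
        fmt.Println(n, t) // ghcr.io/flatcar/calico/kube-controllers v3.29.3
        _, _, d := splitRef("ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9")
        fmt.Println(d) // sha256:feaab019...
    }
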
May 14 01:22:00.650735 containerd[1527]: time="2025-05-14T01:22:00.650692728Z" level=info msg="StartContainer for \"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" returns successfully" May 14 01:22:01.191167 containerd[1527]: time="2025-05-14T01:22:01.191076767Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"5fd4c19b4aea7120c437f7c9f338b0734e33fc108e9d95082d86fafed66fae9f\" pid:4753 exited_at:{seconds:1747185721 nanos:190494206}" May 14 01:22:01.216283 kubelet[2830]: I0514 01:22:01.216216 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75876f4c7b-2qdpw" podStartSLOduration=26.881394974 podStartE2EDuration="33.216192756s" podCreationTimestamp="2025-05-14 01:21:28 +0000 UTC" firstStartedPulling="2025-05-14 01:21:54.200978365 +0000 UTC m=+47.520791978" lastFinishedPulling="2025-05-14 01:22:00.535776146 +0000 UTC m=+53.855589760" observedRunningTime="2025-05-14 01:22:01.141248471 +0000 UTC m=+54.461062124" watchObservedRunningTime="2025-05-14 01:22:01.216192756 +0000 UTC m=+54.536006399" May 14 01:22:01.433293 kubelet[2830]: I0514 01:22:01.432620 2830 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 01:22:02.844858 containerd[1527]: time="2025-05-14T01:22:02.844794671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:22:02.846235 containerd[1527]: time="2025-05-14T01:22:02.846155388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" May 14 01:22:02.847605 containerd[1527]: time="2025-05-14T01:22:02.847547926Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:22:02.850266 containerd[1527]: time="2025-05-14T01:22:02.850196350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:22:02.851024 containerd[1527]: time="2025-05-14T01:22:02.850882890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 2.314980433s" May 14 01:22:02.851024 containerd[1527]: time="2025-05-14T01:22:02.850918528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 14 01:22:02.854024 containerd[1527]: time="2025-05-14T01:22:02.853882935Z" level=info msg="CreateContainer within sandbox \"afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 14 01:22:02.882470 containerd[1527]: time="2025-05-14T01:22:02.882409295Z" level=info msg="Container 3715e1570d97139ae55a44f4bbe3c1a0fdcbe8269e142c3da3ec7e74ea193c74: CDI devices from CRI Config.CDIDevices: []" May 14 01:22:02.892485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount773375904.mount: Deactivated successfully. 
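
The TaskExit events above report exited_at as Unix seconds plus nanoseconds. Converting the pair from the first such event reproduces a wall-clock time just under the containerd timestamp that wraps it:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // seconds:1747185721 nanos:190494206 from the TaskExit event above.
        exitedAt := time.Unix(1747185721, 190494206).UTC()
        fmt.Println(exitedAt.Format(time.RFC3339Nano))
        // 2025-05-14T01:22:01.190494206Z
    }
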
May 14 01:22:02.897922 containerd[1527]: time="2025-05-14T01:22:02.897877714Z" level=info msg="CreateContainer within sandbox \"afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3715e1570d97139ae55a44f4bbe3c1a0fdcbe8269e142c3da3ec7e74ea193c74\"" May 14 01:22:02.898438 containerd[1527]: time="2025-05-14T01:22:02.898277217Z" level=info msg="StartContainer for \"3715e1570d97139ae55a44f4bbe3c1a0fdcbe8269e142c3da3ec7e74ea193c74\"" May 14 01:22:02.899973 containerd[1527]: time="2025-05-14T01:22:02.899933137Z" level=info msg="connecting to shim 3715e1570d97139ae55a44f4bbe3c1a0fdcbe8269e142c3da3ec7e74ea193c74" address="unix:///run/containerd/s/3e9a3323532342dc524f38b0a8cb34730807a256a7f6a524fa0ce43c68ccf23a" protocol=ttrpc version=3 May 14 01:22:02.920805 systemd[1]: Started cri-containerd-3715e1570d97139ae55a44f4bbe3c1a0fdcbe8269e142c3da3ec7e74ea193c74.scope - libcontainer container 3715e1570d97139ae55a44f4bbe3c1a0fdcbe8269e142c3da3ec7e74ea193c74. May 14 01:22:02.964747 containerd[1527]: time="2025-05-14T01:22:02.964687402Z" level=info msg="StartContainer for \"3715e1570d97139ae55a44f4bbe3c1a0fdcbe8269e142c3da3ec7e74ea193c74\" returns successfully" May 14 01:22:02.966601 containerd[1527]: time="2025-05-14T01:22:02.966577240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 14 01:22:05.885930 containerd[1527]: time="2025-05-14T01:22:05.885860589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:22:05.887036 containerd[1527]: time="2025-05-14T01:22:05.886979364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" May 14 01:22:05.888270 containerd[1527]: time="2025-05-14T01:22:05.888217838Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:22:05.890096 containerd[1527]: time="2025-05-14T01:22:05.890055525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 01:22:05.890687 containerd[1527]: time="2025-05-14T01:22:05.890505945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 2.923902265s" May 14 01:22:05.890687 containerd[1527]: time="2025-05-14T01:22:05.890542595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 14 01:22:05.898566 containerd[1527]: time="2025-05-14T01:22:05.898522887Z" level=info msg="CreateContainer within sandbox \"afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 14 01:22:05.906943 containerd[1527]: time="2025-05-14T01:22:05.906757895Z" level=info msg="Container 
a5d5ee00d837214ec939e2eb329df46d844f49c63a8648444ed46cfbbe6c762e: CDI devices from CRI Config.CDIDevices: []" May 14 01:22:05.928940 containerd[1527]: time="2025-05-14T01:22:05.928872604Z" level=info msg="CreateContainer within sandbox \"afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a5d5ee00d837214ec939e2eb329df46d844f49c63a8648444ed46cfbbe6c762e\"" May 14 01:22:05.931219 containerd[1527]: time="2025-05-14T01:22:05.930127970Z" level=info msg="StartContainer for \"a5d5ee00d837214ec939e2eb329df46d844f49c63a8648444ed46cfbbe6c762e\"" May 14 01:22:05.932061 containerd[1527]: time="2025-05-14T01:22:05.932011846Z" level=info msg="connecting to shim a5d5ee00d837214ec939e2eb329df46d844f49c63a8648444ed46cfbbe6c762e" address="unix:///run/containerd/s/3e9a3323532342dc524f38b0a8cb34730807a256a7f6a524fa0ce43c68ccf23a" protocol=ttrpc version=3 May 14 01:22:05.958749 systemd[1]: Started cri-containerd-a5d5ee00d837214ec939e2eb329df46d844f49c63a8648444ed46cfbbe6c762e.scope - libcontainer container a5d5ee00d837214ec939e2eb329df46d844f49c63a8648444ed46cfbbe6c762e. May 14 01:22:06.009938 containerd[1527]: time="2025-05-14T01:22:06.009163874Z" level=info msg="StartContainer for \"a5d5ee00d837214ec939e2eb329df46d844f49c63a8648444ed46cfbbe6c762e\" returns successfully" May 14 01:22:06.156051 kubelet[2830]: I0514 01:22:06.155863 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2jzfd" podStartSLOduration=28.352814127 podStartE2EDuration="38.155846144s" podCreationTimestamp="2025-05-14 01:21:28 +0000 UTC" firstStartedPulling="2025-05-14 01:21:56.088390457 +0000 UTC m=+49.408204100" lastFinishedPulling="2025-05-14 01:22:05.891422494 +0000 UTC m=+59.211236117" observedRunningTime="2025-05-14 01:22:06.153471622 +0000 UTC m=+59.473285225" watchObservedRunningTime="2025-05-14 01:22:06.155846144 +0000 UTC m=+59.475659777" May 14 01:22:07.116954 kubelet[2830]: I0514 01:22:07.116865 2830 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 14 01:22:07.129868 kubelet[2830]: I0514 01:22:07.129810 2830 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 14 01:22:22.122530 containerd[1527]: time="2025-05-14T01:22:22.122479015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"d6e8bb149d0f78496078a92746fa363b5261e0fcd2dce1a821c8209be6fc6969\" pid:4867 exited_at:{seconds:1747185742 nanos:121848711}" May 14 01:22:24.899055 containerd[1527]: time="2025-05-14T01:22:24.899004124Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"5b49a84ea1b5daf90270fcfd33e73f387b2e481f06d4b5071739588e73809d89\" pid:4893 exited_at:{seconds:1747185744 nanos:898594422}" May 14 01:22:31.179041 containerd[1527]: time="2025-05-14T01:22:31.178961394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"b3225fa56d3afaa3aba1d1e352cb2c800f8a3987a7e83326481e2d765b71dee3\" pid:4914 exited_at:{seconds:1747185751 nanos:177175715}" May 14 01:22:52.120504 containerd[1527]: time="2025-05-14T01:22:52.120445870Z" 
level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"ea28f73432675ed49b05f74d2d81d4758029a3b088c83a5806a8efc80b7ebc3d\" pid:4945 exited_at:{seconds:1747185772 nanos:119358270}" May 14 01:23:01.178088 containerd[1527]: time="2025-05-14T01:23:01.177961321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"d9eedc868901702cb094571d4cefebb97cd67758c9cc68117262d984c013284b\" pid:4970 exited_at:{seconds:1747185781 nanos:177217050}" May 14 01:23:22.110282 containerd[1527]: time="2025-05-14T01:23:22.109941381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"69a8aa796b11ae0b11870d805c312b8e86b286846ef521bed14d203266e00264\" pid:5003 exited_at:{seconds:1747185802 nanos:109142789}" May 14 01:23:24.893033 containerd[1527]: time="2025-05-14T01:23:24.892910077Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"dd1fce1bd5a807e457e26527159f83e76d33cb0d5540833cd11f6c4883ca1530\" pid:5027 exited_at:{seconds:1747185804 nanos:892585119}" May 14 01:23:31.178250 containerd[1527]: time="2025-05-14T01:23:31.178191118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"d4570b77634ab5a524e5fbff014c274c30f2fe54054e00a526943f7ac5aafc26\" pid:5060 exited_at:{seconds:1747185811 nanos:177249526}" May 14 01:23:52.135019 containerd[1527]: time="2025-05-14T01:23:52.134915491Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"2adb508855e3b9f5e27ea46b3adba3f2d4421991dc4eb2e60430f56afaeccb19\" pid:5090 exited_at:{seconds:1747185832 nanos:134481816}" May 14 01:24:01.179271 containerd[1527]: time="2025-05-14T01:24:01.179187195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"b12f59cc8d61e67ec00beea4989acd482eca01f79cbc99f5dad8ebdc1a538a80\" pid:5120 exited_at:{seconds:1747185841 nanos:178607774}" May 14 01:24:22.117835 containerd[1527]: time="2025-05-14T01:24:22.117773376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"68c254c25768e2fdbd40cd93d5c23f8f03a90a131e898e0f21190423ec99c68d\" pid:5149 exited_at:{seconds:1747185862 nanos:116254938}" May 14 01:24:24.905394 containerd[1527]: time="2025-05-14T01:24:24.905339118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"798d4a5fdef4586baae9667503e60c7cf5591f24191a8aa2e088a0d12a7c4e66\" pid:5174 exited_at:{seconds:1747185864 nanos:905050843}" May 14 01:24:31.181170 containerd[1527]: time="2025-05-14T01:24:31.181098764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"f5570bd0123f0bcda5e9cae0ebe2d36284c352dc77f52f14c55ee5ae75b9760e\" pid:5196 exited_at:{seconds:1747185871 nanos:180778509}" May 14 01:24:32.753890 systemd[1]: Started sshd@10-37.27.220.42:22-192.81.213.83:43664.service - OpenSSH per-connection server daemon (192.81.213.83:43664). 
May 14 01:24:33.412407 sshd[5206]: Invalid user mojtaba from 192.81.213.83 port 43664 May 14 01:24:33.529727 sshd[5206]: Received disconnect from 192.81.213.83 port 43664:11: Bye Bye [preauth] May 14 01:24:33.529727 sshd[5206]: Disconnected from invalid user mojtaba 192.81.213.83 port 43664 [preauth] May 14 01:24:33.533568 systemd[1]: sshd@10-37.27.220.42:22-192.81.213.83:43664.service: Deactivated successfully. May 14 01:24:52.128443 containerd[1527]: time="2025-05-14T01:24:52.128287177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"9e8946bc92445a318a10785c6002543ec84d07e77982d7c2a2071c680a3211bd\" pid:5232 exited_at:{seconds:1747185892 nanos:127619944}" May 14 01:25:01.184313 containerd[1527]: time="2025-05-14T01:25:01.184258345Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"27a5f0a9c000c859f96b893ccc2ae9f091b43f5cda8bc956dd63555c241f542a\" pid:5256 exited_at:{seconds:1747185901 nanos:183487003}" May 14 01:25:22.117115 containerd[1527]: time="2025-05-14T01:25:22.117065029Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"876a8fce047bb612fea49609c6b4ea5c109ec9b580e85cbf365224a9cf6bc6d2\" pid:5300 exited_at:{seconds:1747185922 nanos:116720194}" May 14 01:25:24.896282 containerd[1527]: time="2025-05-14T01:25:24.896193179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"8f145677fc4a86375a436c841bce7717293f441a4932895a498b89284f45d865\" pid:5323 exited_at:{seconds:1747185924 nanos:895536344}" May 14 01:25:31.180950 containerd[1527]: time="2025-05-14T01:25:31.180838548Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"544214ca761120a63639699d1c717bb0cea735d5f704081c2388a1866d4bd1b5\" pid:5343 exited_at:{seconds:1747185931 nanos:179308365}" May 14 01:25:40.068378 systemd[1]: Started sshd@11-37.27.220.42:22-121.229.10.68:10286.service - OpenSSH per-connection server daemon (121.229.10.68:10286). May 14 01:25:42.611462 sshd[5353]: Received disconnect from 121.229.10.68 port 10286:11: Bye Bye [preauth] May 14 01:25:42.611462 sshd[5353]: Disconnected from authenticating user root 121.229.10.68 port 10286 [preauth] May 14 01:25:42.612925 systemd[1]: sshd@11-37.27.220.42:22-121.229.10.68:10286.service: Deactivated successfully. May 14 01:25:47.951014 systemd[1]: Started sshd@12-37.27.220.42:22-139.178.89.65:36306.service - OpenSSH per-connection server daemon (139.178.89.65:36306). May 14 01:25:48.960738 sshd[5365]: Accepted publickey for core from 139.178.89.65 port 36306 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg May 14 01:25:48.963733 sshd-session[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 01:25:48.974743 systemd-logind[1504]: New session 8 of user core. May 14 01:25:48.984007 systemd[1]: Started session-8.scope - Session 8 of User core. May 14 01:25:50.302836 sshd[5367]: Connection closed by 139.178.89.65 port 36306 May 14 01:25:50.306767 sshd-session[5365]: pam_unix(sshd:session): session closed for user core May 14 01:25:50.318559 systemd[1]: sshd@12-37.27.220.42:22-139.178.89.65:36306.service: Deactivated successfully. 
May 14 01:25:50.322749 systemd[1]: session-8.scope: Deactivated successfully.
May 14 01:25:50.325215 systemd-logind[1504]: Session 8 logged out. Waiting for processes to exit.
May 14 01:25:50.328535 systemd-logind[1504]: Removed session 8.
May 14 01:25:52.116611 containerd[1527]: time="2025-05-14T01:25:52.116507311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"425608dced7bca146ae4e57c6e1ce232e83b99ee2ea5589535935e965655eb25\" pid:5393 exited_at:{seconds:1747185952 nanos:115495911}"
May 14 01:25:55.476421 systemd[1]: Started sshd@13-37.27.220.42:22-139.178.89.65:36322.service - OpenSSH per-connection server daemon (139.178.89.65:36322).
May 14 01:25:56.495554 sshd[5406]: Accepted publickey for core from 139.178.89.65 port 36322 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:25:56.497827 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:25:56.506531 systemd-logind[1504]: New session 9 of user core.
May 14 01:25:56.511402 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 01:25:57.307122 sshd[5408]: Connection closed by 139.178.89.65 port 36322
May 14 01:25:57.310371 sshd-session[5406]: pam_unix(sshd:session): session closed for user core
May 14 01:25:57.317063 systemd[1]: sshd@13-37.27.220.42:22-139.178.89.65:36322.service: Deactivated successfully.
May 14 01:25:57.321624 systemd[1]: session-9.scope: Deactivated successfully.
May 14 01:25:57.323278 systemd-logind[1504]: Session 9 logged out. Waiting for processes to exit.
May 14 01:25:57.325409 systemd-logind[1504]: Removed session 9.
May 14 01:26:01.183842 containerd[1527]: time="2025-05-14T01:26:01.183752892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"d34f6e7068c38e6a6b2c702134bcbfaedc34e88bd0024918440f5f40bca281c8\" pid:5433 exited_at:{seconds:1747185961 nanos:182982118}"
May 14 01:26:01.657892 containerd[1527]: time="2025-05-14T01:26:01.655416031Z" level=warning msg="container event discarded" container=454c159fcd3f69ac81b9f21a2ccee0c7c4bfeed8cf63ac7b73bd9a7a470b2d1d type=CONTAINER_CREATED_EVENT
May 14 01:26:01.691183 containerd[1527]: time="2025-05-14T01:26:01.691106126Z" level=warning msg="container event discarded" container=454c159fcd3f69ac81b9f21a2ccee0c7c4bfeed8cf63ac7b73bd9a7a470b2d1d type=CONTAINER_STARTED_EVENT
May 14 01:26:01.691183 containerd[1527]: time="2025-05-14T01:26:01.691165940Z" level=warning msg="container event discarded" container=8c01f47515a5d7532286d6090545adff5fd65e39639b0f0f14616de4983377ca type=CONTAINER_CREATED_EVENT
May 14 01:26:01.691183 containerd[1527]: time="2025-05-14T01:26:01.691181980Z" level=warning msg="container event discarded" container=8c01f47515a5d7532286d6090545adff5fd65e39639b0f0f14616de4983377ca type=CONTAINER_STARTED_EVENT
May 14 01:26:01.691452 containerd[1527]: time="2025-05-14T01:26:01.691195737Z" level=warning msg="container event discarded" container=a40ea0bfe470afc30de2c5a527eca5bd6a94263660f1e2aca4cb7f36cf4f9c8d type=CONTAINER_CREATED_EVENT
May 14 01:26:01.691452 containerd[1527]: time="2025-05-14T01:26:01.691208621Z" level=warning msg="container event discarded" container=a40ea0bfe470afc30de2c5a527eca5bd6a94263660f1e2aca4cb7f36cf4f9c8d type=CONTAINER_STARTED_EVENT
May 14 01:26:01.715786 containerd[1527]: time="2025-05-14T01:26:01.715714487Z" level=warning msg="container event discarded" container=cf0e75c98920fc7d84f2936b577dc5ae11233ff02f1ee2d0de6fcca3e73241f0 type=CONTAINER_CREATED_EVENT
May 14 01:26:01.715786 containerd[1527]: time="2025-05-14T01:26:01.715773851Z" level=warning msg="container event discarded" container=656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45 type=CONTAINER_CREATED_EVENT
May 14 01:26:01.715949 containerd[1527]: time="2025-05-14T01:26:01.715790291Z" level=warning msg="container event discarded" container=d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96 type=CONTAINER_CREATED_EVENT
May 14 01:26:01.800119 containerd[1527]: time="2025-05-14T01:26:01.800016457Z" level=warning msg="container event discarded" container=cf0e75c98920fc7d84f2936b577dc5ae11233ff02f1ee2d0de6fcca3e73241f0 type=CONTAINER_STARTED_EVENT
May 14 01:26:01.836381 containerd[1527]: time="2025-05-14T01:26:01.836272237Z" level=warning msg="container event discarded" container=d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96 type=CONTAINER_STARTED_EVENT
May 14 01:26:01.836381 containerd[1527]: time="2025-05-14T01:26:01.836324526Z" level=warning msg="container event discarded" container=656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45 type=CONTAINER_STARTED_EVENT
May 14 01:26:02.480738 systemd[1]: Started sshd@14-37.27.220.42:22-139.178.89.65:37158.service - OpenSSH per-connection server daemon (139.178.89.65:37158).
May 14 01:26:03.489071 sshd[5443]: Accepted publickey for core from 139.178.89.65 port 37158 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:03.491525 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:03.502738 systemd-logind[1504]: New session 10 of user core.
May 14 01:26:03.504834 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 01:26:04.272304 sshd[5445]: Connection closed by 139.178.89.65 port 37158
May 14 01:26:04.272984 sshd-session[5443]: pam_unix(sshd:session): session closed for user core
May 14 01:26:04.280529 systemd-logind[1504]: Session 10 logged out. Waiting for processes to exit.
May 14 01:26:04.282274 systemd[1]: sshd@14-37.27.220.42:22-139.178.89.65:37158.service: Deactivated successfully.
May 14 01:26:04.287150 systemd[1]: session-10.scope: Deactivated successfully.
May 14 01:26:04.289868 systemd-logind[1504]: Removed session 10.
May 14 01:26:04.452492 systemd[1]: Started sshd@15-37.27.220.42:22-139.178.89.65:37160.service - OpenSSH per-connection server daemon (139.178.89.65:37160).
May 14 01:26:05.460352 sshd[5458]: Accepted publickey for core from 139.178.89.65 port 37160 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:05.462466 sshd-session[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:05.470722 systemd-logind[1504]: New session 11 of user core.
May 14 01:26:05.476899 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 01:26:06.292738 sshd[5460]: Connection closed by 139.178.89.65 port 37160
May 14 01:26:06.301426 sshd-session[5458]: pam_unix(sshd:session): session closed for user core
May 14 01:26:06.309706 systemd[1]: sshd@15-37.27.220.42:22-139.178.89.65:37160.service: Deactivated successfully.
May 14 01:26:06.313901 systemd[1]: session-11.scope: Deactivated successfully.
May 14 01:26:06.315553 systemd-logind[1504]: Session 11 logged out. Waiting for processes to exit.
May 14 01:26:06.317742 systemd-logind[1504]: Removed session 11.
May 14 01:26:06.472021 systemd[1]: Started sshd@16-37.27.220.42:22-139.178.89.65:37176.service - OpenSSH per-connection server daemon (139.178.89.65:37176).
May 14 01:26:07.510844 sshd[5470]: Accepted publickey for core from 139.178.89.65 port 37176 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:07.513099 sshd-session[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:07.521748 systemd-logind[1504]: New session 12 of user core.
May 14 01:26:07.528883 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 01:26:08.326158 sshd[5474]: Connection closed by 139.178.89.65 port 37176
May 14 01:26:08.328497 sshd-session[5470]: pam_unix(sshd:session): session closed for user core
May 14 01:26:08.338149 systemd-logind[1504]: Session 12 logged out. Waiting for processes to exit.
May 14 01:26:08.338396 systemd[1]: sshd@16-37.27.220.42:22-139.178.89.65:37176.service: Deactivated successfully.
May 14 01:26:08.343583 systemd[1]: session-12.scope: Deactivated successfully.
May 14 01:26:08.346393 systemd-logind[1504]: Removed session 12.
May 14 01:26:11.971458 containerd[1527]: time="2025-05-14T01:26:11.971335431Z" level=warning msg="container event discarded" container=1753dd8af2c9586042440710ea6306a5fe86d45c38dd972143b35bfa304cf49b type=CONTAINER_CREATED_EVENT
May 14 01:26:11.971458 containerd[1527]: time="2025-05-14T01:26:11.971419831Z" level=warning msg="container event discarded" container=1753dd8af2c9586042440710ea6306a5fe86d45c38dd972143b35bfa304cf49b type=CONTAINER_STARTED_EVENT
May 14 01:26:12.003805 containerd[1527]: time="2025-05-14T01:26:12.003721216Z" level=warning msg="container event discarded" container=0b2b9e7a8e812a51ded7968c88b4ad6f89564a815b3b7783a0b874fc441872cc type=CONTAINER_CREATED_EVENT
May 14 01:26:12.080842 containerd[1527]: time="2025-05-14T01:26:12.080712877Z" level=warning msg="container event discarded" container=0b2b9e7a8e812a51ded7968c88b4ad6f89564a815b3b7783a0b874fc441872cc type=CONTAINER_STARTED_EVENT
May 14 01:26:12.358402 containerd[1527]: time="2025-05-14T01:26:12.358276845Z" level=warning msg="container event discarded" container=a0fb3db6c3c2ef566a4db7a65902d29c184c9c5b07d10938b1c1536393fe43c7 type=CONTAINER_CREATED_EVENT
May 14 01:26:12.358402 containerd[1527]: time="2025-05-14T01:26:12.358361547Z" level=warning msg="container event discarded" container=a0fb3db6c3c2ef566a4db7a65902d29c184c9c5b07d10938b1c1536393fe43c7 type=CONTAINER_STARTED_EVENT
May 14 01:26:13.499031 systemd[1]: Started sshd@17-37.27.220.42:22-139.178.89.65:46272.service - OpenSSH per-connection server daemon (139.178.89.65:46272).
May 14 01:26:14.506979 sshd[5492]: Accepted publickey for core from 139.178.89.65 port 46272 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:14.509553 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:14.519694 systemd-logind[1504]: New session 13 of user core.
May 14 01:26:14.528904 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 01:26:15.310145 sshd[5494]: Connection closed by 139.178.89.65 port 46272
May 14 01:26:15.311226 sshd-session[5492]: pam_unix(sshd:session): session closed for user core
May 14 01:26:15.316772 systemd[1]: sshd@17-37.27.220.42:22-139.178.89.65:46272.service: Deactivated successfully.
May 14 01:26:15.320390 systemd[1]: session-13.scope: Deactivated successfully.
May 14 01:26:15.322050 systemd-logind[1504]: Session 13 logged out. Waiting for processes to exit.
May 14 01:26:15.324141 systemd-logind[1504]: Removed session 13.
May 14 01:26:20.481119 systemd[1]: Started sshd@18-37.27.220.42:22-139.178.89.65:57254.service - OpenSSH per-connection server daemon (139.178.89.65:57254).
May 14 01:26:21.467177 sshd[5506]: Accepted publickey for core from 139.178.89.65 port 57254 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:21.469130 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:21.476802 systemd-logind[1504]: New session 14 of user core.
May 14 01:26:21.484893 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 01:26:22.137413 containerd[1527]: time="2025-05-14T01:26:22.137355719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"6b7c1fa113abea54564ef52215ca9e78af68cd8c6472a38eac3e022489a2f577\" pid:5529 exited_at:{seconds:1747185982 nanos:136928477}"
May 14 01:26:22.268175 sshd[5508]: Connection closed by 139.178.89.65 port 57254
May 14 01:26:22.269227 sshd-session[5506]: pam_unix(sshd:session): session closed for user core
May 14 01:26:22.274922 systemd-logind[1504]: Session 14 logged out. Waiting for processes to exit.
May 14 01:26:22.276313 systemd[1]: sshd@18-37.27.220.42:22-139.178.89.65:57254.service: Deactivated successfully.
May 14 01:26:22.280340 systemd[1]: session-14.scope: Deactivated successfully.
May 14 01:26:22.283482 systemd-logind[1504]: Removed session 14.
May 14 01:26:24.897440 containerd[1527]: time="2025-05-14T01:26:24.897334082Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"4c7c7430bcae212b0d1f99e084dc04bbf671484f5515bd43f5bc835544362e48\" pid:5557 exited_at:{seconds:1747185984 nanos:897030706}"
May 14 01:26:24.973126 containerd[1527]: time="2025-05-14T01:26:24.972994647Z" level=warning msg="container event discarded" container=7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb type=CONTAINER_CREATED_EVENT
May 14 01:26:25.033515 containerd[1527]: time="2025-05-14T01:26:25.033320427Z" level=warning msg="container event discarded" container=7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb type=CONTAINER_STARTED_EVENT
May 14 01:26:27.442942 systemd[1]: Started sshd@19-37.27.220.42:22-139.178.89.65:50532.service - OpenSSH per-connection server daemon (139.178.89.65:50532).
May 14 01:26:28.464476 sshd[5568]: Accepted publickey for core from 139.178.89.65 port 50532 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:28.466699 sshd-session[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:28.475495 systemd-logind[1504]: New session 15 of user core.
May 14 01:26:28.479920 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 01:26:28.687949 containerd[1527]: time="2025-05-14T01:26:28.687776754Z" level=warning msg="container event discarded" container=fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33 type=CONTAINER_CREATED_EVENT
May 14 01:26:28.687949 containerd[1527]: time="2025-05-14T01:26:28.687907031Z" level=warning msg="container event discarded" container=fa2b548e032839e944629722933a48e2d14a69178af5a9ec196ecaf7a39f5b33 type=CONTAINER_STARTED_EVENT
May 14 01:26:28.715524 containerd[1527]: time="2025-05-14T01:26:28.715298815Z" level=warning msg="container event discarded" container=d0d3dee6febe6d6504a1b4d84499593abf604423adbd7f3be7e8380d26b76c30 type=CONTAINER_CREATED_EVENT
May 14 01:26:28.715524 containerd[1527]: time="2025-05-14T01:26:28.715368948Z" level=warning msg="container event discarded" container=d0d3dee6febe6d6504a1b4d84499593abf604423adbd7f3be7e8380d26b76c30 type=CONTAINER_STARTED_EVENT
May 14 01:26:29.237591 sshd[5570]: Connection closed by 139.178.89.65 port 50532
May 14 01:26:29.238517 sshd-session[5568]: pam_unix(sshd:session): session closed for user core
May 14 01:26:29.242693 systemd[1]: sshd@19-37.27.220.42:22-139.178.89.65:50532.service: Deactivated successfully.
May 14 01:26:29.246252 systemd[1]: session-15.scope: Deactivated successfully.
May 14 01:26:29.248415 systemd-logind[1504]: Session 15 logged out. Waiting for processes to exit.
May 14 01:26:29.250307 systemd-logind[1504]: Removed session 15.
May 14 01:26:29.405249 update_engine[1508]: I20250514 01:26:29.405169 1508 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 14 01:26:29.405884 update_engine[1508]: I20250514 01:26:29.405795 1508 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 14 01:26:29.408006 update_engine[1508]: I20250514 01:26:29.407800 1508 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 14 01:26:29.411692 update_engine[1508]: I20250514 01:26:29.410123 1508 omaha_request_params.cc:62] Current group set to alpha
May 14 01:26:29.411692 update_engine[1508]: I20250514 01:26:29.410297 1508 update_attempter.cc:499] Already updated boot flags. Skipping.
May 14 01:26:29.411692 update_engine[1508]: I20250514 01:26:29.410310 1508 update_attempter.cc:643] Scheduling an action processor start.
May 14 01:26:29.411692 update_engine[1508]: I20250514 01:26:29.410338 1508 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 01:26:29.411692 update_engine[1508]: I20250514 01:26:29.410398 1508 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 14 01:26:29.411692 update_engine[1508]: I20250514 01:26:29.410482 1508 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 01:26:29.411692 update_engine[1508]: I20250514 01:26:29.410494 1508 omaha_request_action.cc:272] Request:
May 14 01:26:29.411692 update_engine[1508]:
May 14 01:26:29.411692 update_engine[1508]:
May 14 01:26:29.411692 update_engine[1508]:
May 14 01:26:29.411692 update_engine[1508]:
May 14 01:26:29.411692 update_engine[1508]:
May 14 01:26:29.411692 update_engine[1508]:
May 14 01:26:29.411692 update_engine[1508]:
May 14 01:26:29.411692 update_engine[1508]:
May 14 01:26:29.411692 update_engine[1508]: I20250514 01:26:29.410504 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 01:26:29.414274 systemd[1]: Started sshd@20-37.27.220.42:22-139.178.89.65:50546.service - OpenSSH per-connection server daemon (139.178.89.65:50546).
May 14 01:26:29.431242 update_engine[1508]: I20250514 01:26:29.430624 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 01:26:29.431242 update_engine[1508]: I20250514 01:26:29.431104 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 01:26:29.432280 update_engine[1508]: E20250514 01:26:29.432235 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 01:26:29.432350 update_engine[1508]: I20250514 01:26:29.432307 1508 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 14 01:26:29.445956 locksmithd[1536]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 14 01:26:30.444037 sshd[5582]: Accepted publickey for core from 139.178.89.65 port 50546 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:30.447733 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:30.456763 systemd-logind[1504]: New session 16 of user core.
May 14 01:26:30.462858 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 01:26:31.008372 containerd[1527]: time="2025-05-14T01:26:31.008203772Z" level=warning msg="container event discarded" container=17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517 type=CONTAINER_CREATED_EVENT
May 14 01:26:31.087020 containerd[1527]: time="2025-05-14T01:26:31.086906703Z" level=warning msg="container event discarded" container=17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517 type=CONTAINER_STARTED_EVENT
May 14 01:26:31.169199 containerd[1527]: time="2025-05-14T01:26:31.169124602Z" level=warning msg="container event discarded" container=17017cc081439c9d44a54e006ac8219afbb01790a8cfae7630b118796a07f517 type=CONTAINER_STOPPED_EVENT
May 14 01:26:31.201745 containerd[1527]: time="2025-05-14T01:26:31.201396100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"f0e086f6e3458349e4496b9bec3c3e2fb4a18fce690418b16fd021cfb2e774b5\" pid:5603 exited_at:{seconds:1747185991 nanos:200993904}"
May 14 01:26:31.459927 sshd[5584]: Connection closed by 139.178.89.65 port 50546
May 14 01:26:31.466396 sshd-session[5582]: pam_unix(sshd:session): session closed for user core
May 14 01:26:31.473300 systemd[1]: sshd@20-37.27.220.42:22-139.178.89.65:50546.service: Deactivated successfully.
May 14 01:26:31.478096 systemd[1]: session-16.scope: Deactivated successfully.
May 14 01:26:31.481138 systemd-logind[1504]: Session 16 logged out. Waiting for processes to exit.
May 14 01:26:31.483512 systemd-logind[1504]: Removed session 16.
May 14 01:26:31.633920 systemd[1]: Started sshd@21-37.27.220.42:22-139.178.89.65:50560.service - OpenSSH per-connection server daemon (139.178.89.65:50560).
May 14 01:26:32.661344 sshd[5616]: Accepted publickey for core from 139.178.89.65 port 50560 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:32.663511 sshd-session[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:32.672906 systemd-logind[1504]: New session 17 of user core.
May 14 01:26:32.681941 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 01:26:33.827811 containerd[1527]: time="2025-05-14T01:26:33.827671380Z" level=warning msg="container event discarded" container=0d6adeb94069bbf8040a07ed24b8ea85bbd2f68d5a3f60c265c2113d8022be95 type=CONTAINER_CREATED_EVENT
May 14 01:26:33.890320 containerd[1527]: time="2025-05-14T01:26:33.890175223Z" level=warning msg="container event discarded" container=0d6adeb94069bbf8040a07ed24b8ea85bbd2f68d5a3f60c265c2113d8022be95 type=CONTAINER_STARTED_EVENT
May 14 01:26:34.719082 sshd[5630]: Connection closed by 139.178.89.65 port 50560
May 14 01:26:34.721022 sshd-session[5616]: pam_unix(sshd:session): session closed for user core
May 14 01:26:34.725949 systemd[1]: sshd@21-37.27.220.42:22-139.178.89.65:50560.service: Deactivated successfully.
May 14 01:26:34.729321 systemd[1]: session-17.scope: Deactivated successfully.
May 14 01:26:34.730871 systemd-logind[1504]: Session 17 logged out. Waiting for processes to exit.
May 14 01:26:34.732691 systemd-logind[1504]: Removed session 17.
May 14 01:26:34.891534 systemd[1]: Started sshd@22-37.27.220.42:22-139.178.89.65:50566.service - OpenSSH per-connection server daemon (139.178.89.65:50566).
May 14 01:26:35.915564 sshd[5647]: Accepted publickey for core from 139.178.89.65 port 50566 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:35.917877 sshd-session[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:35.925418 systemd-logind[1504]: New session 18 of user core.
May 14 01:26:35.936979 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 01:26:36.934812 sshd[5649]: Connection closed by 139.178.89.65 port 50566
May 14 01:26:36.935751 sshd-session[5647]: pam_unix(sshd:session): session closed for user core
May 14 01:26:36.940277 systemd[1]: sshd@22-37.27.220.42:22-139.178.89.65:50566.service: Deactivated successfully.
May 14 01:26:36.943730 systemd[1]: session-18.scope: Deactivated successfully.
May 14 01:26:36.945929 systemd-logind[1504]: Session 18 logged out. Waiting for processes to exit.
May 14 01:26:36.948103 systemd-logind[1504]: Removed session 18.
May 14 01:26:37.109251 systemd[1]: Started sshd@23-37.27.220.42:22-139.178.89.65:58598.service - OpenSSH per-connection server daemon (139.178.89.65:58598).
May 14 01:26:38.123846 sshd[5660]: Accepted publickey for core from 139.178.89.65 port 58598 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:38.126201 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:38.135329 systemd-logind[1504]: New session 19 of user core.
May 14 01:26:38.138864 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 01:26:38.935366 sshd[5662]: Connection closed by 139.178.89.65 port 58598
May 14 01:26:38.936309 sshd-session[5660]: pam_unix(sshd:session): session closed for user core
May 14 01:26:38.941122 systemd[1]: sshd@23-37.27.220.42:22-139.178.89.65:58598.service: Deactivated successfully.
May 14 01:26:38.945475 systemd[1]: session-19.scope: Deactivated successfully.
May 14 01:26:38.948308 systemd-logind[1504]: Session 19 logged out. Waiting for processes to exit.
May 14 01:26:38.951091 systemd-logind[1504]: Removed session 19.
May 14 01:26:39.338226 update_engine[1508]: I20250514 01:26:39.338106 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 01:26:39.338803 update_engine[1508]: I20250514 01:26:39.338506 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 01:26:39.342994 update_engine[1508]: I20250514 01:26:39.342823 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 01:26:39.343090 update_engine[1508]: E20250514 01:26:39.343015 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 01:26:39.343145 update_engine[1508]: I20250514 01:26:39.343089 1508 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 14 01:26:39.762002 containerd[1527]: time="2025-05-14T01:26:39.761741412Z" level=warning msg="container event discarded" container=b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124 type=CONTAINER_CREATED_EVENT
May 14 01:26:39.850371 containerd[1527]: time="2025-05-14T01:26:39.850256715Z" level=warning msg="container event discarded" container=b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124 type=CONTAINER_STARTED_EVENT
May 14 01:26:40.400792 containerd[1527]: time="2025-05-14T01:26:40.400682377Z" level=warning msg="container event discarded" container=b7c34e24bd1d93fe0c7045669c48b16df7d9848dda312f07eb3d2b986862d124 type=CONTAINER_STOPPED_EVENT
May 14 01:26:44.108381 systemd[1]: Started sshd@24-37.27.220.42:22-139.178.89.65:58608.service - OpenSSH per-connection server daemon (139.178.89.65:58608).
May 14 01:26:45.112081 sshd[5678]: Accepted publickey for core from 139.178.89.65 port 58608 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:45.114407 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:45.124017 systemd-logind[1504]: New session 20 of user core.
May 14 01:26:45.128885 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 01:26:45.873581 sshd[5680]: Connection closed by 139.178.89.65 port 58608
May 14 01:26:45.874797 sshd-session[5678]: pam_unix(sshd:session): session closed for user core
May 14 01:26:45.881503 systemd[1]: sshd@24-37.27.220.42:22-139.178.89.65:58608.service: Deactivated successfully.
May 14 01:26:45.886446 systemd[1]: session-20.scope: Deactivated successfully.
May 14 01:26:45.888443 systemd-logind[1504]: Session 20 logged out. Waiting for processes to exit.
May 14 01:26:45.890976 systemd-logind[1504]: Removed session 20.
May 14 01:26:48.959052 containerd[1527]: time="2025-05-14T01:26:48.958871506Z" level=warning msg="container event discarded" container=7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1 type=CONTAINER_CREATED_EVENT
May 14 01:26:49.228613 containerd[1527]: time="2025-05-14T01:26:49.228357187Z" level=warning msg="container event discarded" container=7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1 type=CONTAINER_STARTED_EVENT
May 14 01:26:49.341099 update_engine[1508]: I20250514 01:26:49.340984 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 01:26:49.341703 update_engine[1508]: I20250514 01:26:49.341359 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 01:26:49.341829 update_engine[1508]: I20250514 01:26:49.341764 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 01:26:49.342419 update_engine[1508]: E20250514 01:26:49.342298 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 01:26:49.342419 update_engine[1508]: I20250514 01:26:49.342367 1508 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 14 01:26:51.064019 systemd[1]: Started sshd@25-37.27.220.42:22-139.178.89.65:50806.service - OpenSSH per-connection server daemon (139.178.89.65:50806).
May 14 01:26:51.069084 systemd[1]: Started sshd@26-37.27.220.42:22-62.201.212.52:47002.service - OpenSSH per-connection server daemon (62.201.212.52:47002).
May 14 01:26:52.088740 sshd[5698]: Accepted publickey for core from 139.178.89.65 port 50806 ssh2: RSA SHA256:wzXoKaIc/s6rgd+lrAvPV4Ayc93C3Z1S3pv9QsITRgg
May 14 01:26:52.089168 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 01:26:52.097365 systemd-logind[1504]: New session 21 of user core.
May 14 01:26:52.105857 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 01:26:52.178147 containerd[1527]: time="2025-05-14T01:26:52.178091129Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7b7d873459b2bb5453ece80f283134606a65d1de41345c88671a50bcf3aaddb1\" id:\"a5e8c092f18ca06791a9994c8de7d0bd6d84787cb1d6fdbb5bcc6275c65d63c2\" pid:5715 exited_at:{seconds:1747186012 nanos:177632724}"
May 14 01:26:52.360846 containerd[1527]: time="2025-05-14T01:26:52.360588208Z" level=warning msg="container event discarded" container=14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def type=CONTAINER_CREATED_EVENT
May 14 01:26:52.360846 containerd[1527]: time="2025-05-14T01:26:52.360688502Z" level=warning msg="container event discarded" container=14d925657b0241b65f107800d27054b1b5d2e8ba1103edc0b7ec2bec874f7def type=CONTAINER_STARTED_EVENT
May 14 01:26:52.633598 sshd[5699]: Invalid user debian from 62.201.212.52 port 47002
May 14 01:26:52.905730 sshd-session[5734]: pam_faillock(sshd:auth): User unknown
May 14 01:26:52.912032 sshd[5699]: Postponed keyboard-interactive for invalid user debian from 62.201.212.52 port 47002 ssh2 [preauth]
May 14 01:26:52.939276 sshd[5710]: Connection closed by 139.178.89.65 port 50806
May 14 01:26:52.941045 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
May 14 01:26:52.943978 systemd-logind[1504]: Session 21 logged out. Waiting for processes to exit.
May 14 01:26:52.944747 systemd[1]: sshd@25-37.27.220.42:22-139.178.89.65:50806.service: Deactivated successfully.
May 14 01:26:52.947127 systemd[1]: session-21.scope: Deactivated successfully.
May 14 01:26:52.949347 systemd-logind[1504]: Removed session 21.
May 14 01:26:53.203397 containerd[1527]: time="2025-05-14T01:26:53.203154906Z" level=warning msg="container event discarded" container=bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e type=CONTAINER_CREATED_EVENT
May 14 01:26:53.203397 containerd[1527]: time="2025-05-14T01:26:53.203224304Z" level=warning msg="container event discarded" container=bd319cba7efb3b2e2a20704a7b94401f0bfead92094fce25e0bc8d4f08638e5e type=CONTAINER_STARTED_EVENT
May 14 01:26:53.274675 containerd[1527]: time="2025-05-14T01:26:53.274516814Z" level=warning msg="container event discarded" container=04b73af5dec4d5c2c19b79ed731c6901f7017ab1ed904b300282d3d762d631c3 type=CONTAINER_CREATED_EVENT
May 14 01:26:53.280959 sshd-session[5734]: pam_unix(sshd:auth): check pass; user unknown
May 14 01:26:53.281002 sshd-session[5734]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=62.201.212.52
May 14 01:26:53.281811 sshd-session[5734]: pam_faillock(sshd:auth): User unknown
May 14 01:26:53.382184 containerd[1527]: time="2025-05-14T01:26:53.382097149Z" level=warning msg="container event discarded" container=3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a type=CONTAINER_CREATED_EVENT
May 14 01:26:53.382184 containerd[1527]: time="2025-05-14T01:26:53.382161668Z" level=warning msg="container event discarded" container=3451c93d7ce1e413be03b13f8069d29431cbca5b06f697df0d28d7e62a7c8c2a type=CONTAINER_STARTED_EVENT
May 14 01:26:53.443560 containerd[1527]: time="2025-05-14T01:26:53.443437740Z" level=warning msg="container event discarded" container=04b73af5dec4d5c2c19b79ed731c6901f7017ab1ed904b300282d3d762d631c3 type=CONTAINER_STARTED_EVENT
May 14 01:26:53.454370 containerd[1527]: time="2025-05-14T01:26:53.454231023Z" level=warning msg="container event discarded" container=599d59d2de206ff95e2e13e1301aa2e5dd3b4c475b04a80234d12d7b6aa4926b type=CONTAINER_CREATED_EVENT
May 14 01:26:53.520785 containerd[1527]: time="2025-05-14T01:26:53.520676431Z" level=warning msg="container event discarded" container=599d59d2de206ff95e2e13e1301aa2e5dd3b4c475b04a80234d12d7b6aa4926b type=CONTAINER_STARTED_EVENT
May 14 01:26:53.541300 containerd[1527]: time="2025-05-14T01:26:53.541239526Z" level=warning msg="container event discarded" container=a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390 type=CONTAINER_CREATED_EVENT
May 14 01:26:53.541300 containerd[1527]: time="2025-05-14T01:26:53.541285110Z" level=warning msg="container event discarded" container=a6c95c712d67ec1d8e9a9e05725977a44b4125928c3abc669dc8c2b82a9d6390 type=CONTAINER_STARTED_EVENT
May 14 01:26:54.209070 containerd[1527]: time="2025-05-14T01:26:54.208917853Z" level=warning msg="container event discarded" container=dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75 type=CONTAINER_CREATED_EVENT
May 14 01:26:54.209070 containerd[1527]: time="2025-05-14T01:26:54.209007939Z" level=warning msg="container event discarded" container=dcd95909ae97c31daca874a06b06946d4f93ad7dac80d2049c9eadb193244c75 type=CONTAINER_STARTED_EVENT
May 14 01:26:54.900492 sshd[5699]: PAM: Permission denied for illegal user debian from 62.201.212.52
May 14 01:26:54.900492 sshd[5699]: Failed keyboard-interactive/pam for invalid user debian from 62.201.212.52 port 47002 ssh2
May 14 01:26:55.142205 sshd[5699]: Connection closed by invalid user debian 62.201.212.52 port 47002 [preauth]
May 14 01:26:55.145868 systemd[1]: sshd@26-37.27.220.42:22-62.201.212.52:47002.service: Deactivated successfully.
May 14 01:26:56.098033 containerd[1527]: time="2025-05-14T01:26:56.097939821Z" level=warning msg="container event discarded" container=afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a type=CONTAINER_CREATED_EVENT
May 14 01:26:56.098033 containerd[1527]: time="2025-05-14T01:26:56.097999742Z" level=warning msg="container event discarded" container=afa53a1bbba0dff076aaadd0e011aa37dbe975f61176d26ca3bc0dd26862361a type=CONTAINER_STARTED_EVENT
May 14 01:26:56.904913 containerd[1527]: time="2025-05-14T01:26:56.904827087Z" level=warning msg="container event discarded" container=8ba7436810e730202e1163b72a456e41cf59839ef38686e660487af0ed5c4ad4 type=CONTAINER_CREATED_EVENT
May 14 01:26:56.988192 containerd[1527]: time="2025-05-14T01:26:56.988135281Z" level=warning msg="container event discarded" container=8ba7436810e730202e1163b72a456e41cf59839ef38686e660487af0ed5c4ad4 type=CONTAINER_STARTED_EVENT
May 14 01:26:57.412354 containerd[1527]: time="2025-05-14T01:26:57.412263896Z" level=warning msg="container event discarded" container=4c12405b92f2720ac713c26279b68a4a8e154e2139692b0b2aba95b324c61f57 type=CONTAINER_CREATED_EVENT
May 14 01:26:57.503810 containerd[1527]: time="2025-05-14T01:26:57.503717375Z" level=warning msg="container event discarded" container=4c12405b92f2720ac713c26279b68a4a8e154e2139692b0b2aba95b324c61f57 type=CONTAINER_STARTED_EVENT
May 14 01:26:58.468943 systemd[1]: Started sshd@27-37.27.220.42:22-211.196.31.2:45461.service - OpenSSH per-connection server daemon (211.196.31.2:45461).
May 14 01:26:59.343629 update_engine[1508]: I20250514 01:26:59.343510 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 01:26:59.344286 update_engine[1508]: I20250514 01:26:59.343941 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 01:26:59.344342 update_engine[1508]: I20250514 01:26:59.344305 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 01:26:59.344931 update_engine[1508]: E20250514 01:26:59.344856 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 01:26:59.344931 update_engine[1508]: I20250514 01:26:59.344924 1508 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 01:26:59.345097 update_engine[1508]: I20250514 01:26:59.344936 1508 omaha_request_action.cc:617] Omaha request response:
May 14 01:26:59.345097 update_engine[1508]: E20250514 01:26:59.345070 1508 omaha_request_action.cc:636] Omaha request network transfer failed.
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.352766 1508 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.352816 1508 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.352830 1508 update_attempter.cc:306] Processing Done.
May 14 01:26:59.355058 update_engine[1508]: E20250514 01:26:59.352864 1508 update_attempter.cc:619] Update failed.
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.352875 1508 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.352887 1508 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.352900 1508 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.353037 1508 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.353111 1508 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.353125 1508 omaha_request_action.cc:272] Request:
May 14 01:26:59.355058 update_engine[1508]:
May 14 01:26:59.355058 update_engine[1508]:
May 14 01:26:59.355058 update_engine[1508]:
May 14 01:26:59.355058 update_engine[1508]:
May 14 01:26:59.355058 update_engine[1508]:
May 14 01:26:59.355058 update_engine[1508]:
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.353139 1508 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 14 01:26:59.355058 update_engine[1508]: I20250514 01:26:59.353475 1508 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 14 01:26:59.355957 update_engine[1508]: I20250514 01:26:59.353920 1508 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 14 01:26:59.355957 update_engine[1508]: E20250514 01:26:59.355731 1508 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 14 01:26:59.355957 update_engine[1508]: I20250514 01:26:59.355793 1508 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 14 01:26:59.355957 update_engine[1508]: I20250514 01:26:59.355805 1508 omaha_request_action.cc:617] Omaha request response:
May 14 01:26:59.355957 update_engine[1508]: I20250514 01:26:59.355816 1508 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 01:26:59.355957 update_engine[1508]: I20250514 01:26:59.355824 1508 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 14 01:26:59.355957 update_engine[1508]: I20250514 01:26:59.355834 1508 update_attempter.cc:306] Processing Done.
May 14 01:26:59.355957 update_engine[1508]: I20250514 01:26:59.355844 1508 update_attempter.cc:310] Error event sent.
May 14 01:26:59.355957 update_engine[1508]: I20250514 01:26:59.355857 1508 update_check_scheduler.cc:74] Next update check in 49m10s
May 14 01:26:59.356300 locksmithd[1536]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 14 01:26:59.356747 locksmithd[1536]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 14 01:27:00.391900 sshd[5743]: Connection closed by 211.196.31.2 port 45461
May 14 01:27:00.393140 systemd[1]: sshd@27-37.27.220.42:22-211.196.31.2:45461.service: Deactivated successfully.
May 14 01:27:00.583264 containerd[1527]: time="2025-05-14T01:27:00.583160332Z" level=warning msg="container event discarded" container=e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6 type=CONTAINER_CREATED_EVENT
May 14 01:27:00.660881 containerd[1527]: time="2025-05-14T01:27:00.660545265Z" level=warning msg="container event discarded" container=e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6 type=CONTAINER_STARTED_EVENT
May 14 01:27:01.178468 containerd[1527]: time="2025-05-14T01:27:01.178394052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e42efdc556bc2376d710923bc1757f1b1156a716861b080a486a06394d34edd6\" id:\"c6a28da3f55a9f5975e9a585216b6886f3f5708ac46aeed9e7bb227b96f26337\" pid:5758 exited_at:{seconds:1747186021 nanos:177897824}"
May 14 01:27:02.907224 containerd[1527]: time="2025-05-14T01:27:02.907067135Z" level=warning msg="container event discarded" container=3715e1570d97139ae55a44f4bbe3c1a0fdcbe8269e142c3da3ec7e74ea193c74 type=CONTAINER_CREATED_EVENT
May 14 01:27:02.974685 containerd[1527]: time="2025-05-14T01:27:02.974504779Z" level=warning msg="container event discarded" container=3715e1570d97139ae55a44f4bbe3c1a0fdcbe8269e142c3da3ec7e74ea193c74 type=CONTAINER_STARTED_EVENT
May 14 01:27:05.938690 containerd[1527]: time="2025-05-14T01:27:05.938524004Z" level=warning msg="container event discarded" container=a5d5ee00d837214ec939e2eb329df46d844f49c63a8648444ed46cfbbe6c762e type=CONTAINER_CREATED_EVENT
May 14 01:27:06.017607 containerd[1527]: time="2025-05-14T01:27:06.017448799Z" level=warning msg="container event discarded" container=a5d5ee00d837214ec939e2eb329df46d844f49c63a8648444ed46cfbbe6c762e type=CONTAINER_STARTED_EVENT
May 14 01:27:09.108192 systemd[1]: cri-containerd-7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb.scope: Deactivated successfully.
May 14 01:27:09.108691 systemd[1]: cri-containerd-7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb.scope: Consumed 5.894s CPU time, 61.8M memory peak, 36.3M read from disk.
May 14 01:27:09.164966 containerd[1527]: time="2025-05-14T01:27:09.164899286Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb\" id:\"7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb\" pid:3184 exit_status:1 exited_at:{seconds:1747186029 nanos:164220920}"
May 14 01:27:09.207318 containerd[1527]: time="2025-05-14T01:27:09.207231106Z" level=info msg="received exit event container_id:\"7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb\" id:\"7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb\" pid:3184 exit_status:1 exited_at:{seconds:1747186029 nanos:164220920}"
May 14 01:27:09.314742 systemd[1]: cri-containerd-656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45.scope: Deactivated successfully.
May 14 01:27:09.315061 systemd[1]: cri-containerd-656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45.scope: Consumed 4.485s CPU time, 42.6M memory peak, 32.4M read from disk.
May 14 01:27:09.323200 containerd[1527]: time="2025-05-14T01:27:09.323106323Z" level=info msg="received exit event container_id:\"656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45\" id:\"656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45\" pid:2683 exit_status:1 exited_at:{seconds:1747186029 nanos:322599014}"
May 14 01:27:09.325050 containerd[1527]: time="2025-05-14T01:27:09.323302236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45\" id:\"656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45\" pid:2683 exit_status:1 exited_at:{seconds:1747186029 nanos:322599014}"
May 14 01:27:09.334809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb-rootfs.mount: Deactivated successfully.
May 14 01:27:09.359902 kubelet[2830]: E0514 01:27:09.345633 2830 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:54944->10.0.0.2:2379: read: connection timed out"
May 14 01:27:09.364905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45-rootfs.mount: Deactivated successfully.
May 14 01:27:09.399036 systemd[1]: cri-containerd-d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96.scope: Deactivated successfully.
May 14 01:27:09.399767 systemd[1]: cri-containerd-d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96.scope: Consumed 6.796s CPU time, 86.8M memory peak, 58.5M read from disk.
May 14 01:27:09.403147 containerd[1527]: time="2025-05-14T01:27:09.402789636Z" level=info msg="received exit event container_id:\"d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96\" id:\"d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96\" pid:2690 exit_status:1 exited_at:{seconds:1747186029 nanos:402329813}"
May 14 01:27:09.403147 containerd[1527]: time="2025-05-14T01:27:09.402805045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96\" id:\"d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96\" pid:2690 exit_status:1 exited_at:{seconds:1747186029 nanos:402329813}"
May 14 01:27:09.432421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96-rootfs.mount: Deactivated successfully.
May 14 01:27:10.218575 kubelet[2830]: I0514 01:27:10.218511 2830 scope.go:117] "RemoveContainer" containerID="656de50a1c7a624dd04387dd914d40e888ec50f743ec0a8376ac7dc30bbe9f45"
May 14 01:27:10.223982 kubelet[2830]: I0514 01:27:10.223788 2830 scope.go:117] "RemoveContainer" containerID="d8014d557706e64c8bf0fc1376d7ccfaee7b14bf57f8dab453a8ee121dd14b96"
May 14 01:27:10.230193 kubelet[2830]: I0514 01:27:10.230104 2830 scope.go:117] "RemoveContainer" containerID="7852863450d3adb36cb50bc762437c523332805c4a23e1b49ca6d57112f2f2bb"
May 14 01:27:10.270850 containerd[1527]: time="2025-05-14T01:27:10.270697688Z" level=info msg="CreateContainer within sandbox \"a40ea0bfe470afc30de2c5a527eca5bd6a94263660f1e2aca4cb7f36cf4f9c8d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 14 01:27:10.271407 containerd[1527]: time="2025-05-14T01:27:10.271349615Z" level=info msg="CreateContainer within sandbox \"8c01f47515a5d7532286d6090545adff5fd65e39639b0f0f14616de4983377ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 14 01:27:10.344129 containerd[1527]: time="2025-05-14T01:27:10.344087566Z" level=info msg="CreateContainer within sandbox \"a0fb3db6c3c2ef566a4db7a65902d29c184c9c5b07d10938b1c1536393fe43c7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
May 14 01:27:10.420011 containerd[1527]: time="2025-05-14T01:27:10.419853157Z" level=info msg="Container d3392e7743cabbcf8ead5e08b623a955d9b744faa6605ffaf4ddf4299adbd0c1: CDI devices from CRI Config.CDIDevices: []"
May 14 01:27:10.420011 containerd[1527]: time="2025-05-14T01:27:10.419885917Z" level=info msg="Container bd03eb8f655a4e2f3a3767a319972ceccb4c267cd1d0e6f87cb6a294ce7d92e9: CDI devices from CRI Config.CDIDevices: []"
May 14 01:27:10.422661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811385952.mount: Deactivated successfully.
May 14 01:27:10.466805 containerd[1527]: time="2025-05-14T01:27:10.466760016Z" level=info msg="Container ae59c851bf43203f78075c2e2b649b4fd620fb856afb51eeed5bda04156314ce: CDI devices from CRI Config.CDIDevices: []"
May 14 01:27:10.472000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1162367952.mount: Deactivated successfully.
May 14 01:27:10.483501 containerd[1527]: time="2025-05-14T01:27:10.483441304Z" level=info msg="CreateContainer within sandbox \"a0fb3db6c3c2ef566a4db7a65902d29c184c9c5b07d10938b1c1536393fe43c7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ae59c851bf43203f78075c2e2b649b4fd620fb856afb51eeed5bda04156314ce\""
May 14 01:27:10.485774 containerd[1527]: time="2025-05-14T01:27:10.485741466Z" level=info msg="StartContainer for \"ae59c851bf43203f78075c2e2b649b4fd620fb856afb51eeed5bda04156314ce\""
May 14 01:27:10.489701 containerd[1527]: time="2025-05-14T01:27:10.489590969Z" level=info msg="CreateContainer within sandbox \"8c01f47515a5d7532286d6090545adff5fd65e39639b0f0f14616de4983377ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bd03eb8f655a4e2f3a3767a319972ceccb4c267cd1d0e6f87cb6a294ce7d92e9\""
May 14 01:27:10.490786 containerd[1527]: time="2025-05-14T01:27:10.490695715Z" level=info msg="CreateContainer within sandbox \"a40ea0bfe470afc30de2c5a527eca5bd6a94263660f1e2aca4cb7f36cf4f9c8d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d3392e7743cabbcf8ead5e08b623a955d9b744faa6605ffaf4ddf4299adbd0c1\""
May 14 01:27:10.491702 containerd[1527]: time="2025-05-14T01:27:10.491296438Z" level=info msg="StartContainer for \"bd03eb8f655a4e2f3a3767a319972ceccb4c267cd1d0e6f87cb6a294ce7d92e9\""
May 14 01:27:10.491702 containerd[1527]: time="2025-05-14T01:27:10.491432029Z" level=info msg="connecting to shim ae59c851bf43203f78075c2e2b649b4fd620fb856afb51eeed5bda04156314ce" address="unix:///run/containerd/s/4df6e0b3267a2df4859f0b379233e8ba465a78031520d0c4ed6576108aa80ace" protocol=ttrpc version=3
May 14 01:27:10.491849 containerd[1527]: time="2025-05-14T01:27:10.491715225Z" level=info msg="StartContainer for \"d3392e7743cabbcf8ead5e08b623a955d9b744faa6605ffaf4ddf4299adbd0c1\""
May 14 01:27:10.492437 containerd[1527]: time="2025-05-14T01:27:10.492405263Z" level=info msg="connecting to shim d3392e7743cabbcf8ead5e08b623a955d9b744faa6605ffaf4ddf4299adbd0c1" address="unix:///run/containerd/s/b708869e27e69f179ecfb4f2ae026567c827252927ae8879f9f494e8b83ecd7b" protocol=ttrpc version=3
May 14 01:27:10.495327 containerd[1527]: time="2025-05-14T01:27:10.495300127Z" level=info msg="connecting to shim bd03eb8f655a4e2f3a3767a319972ceccb4c267cd1d0e6f87cb6a294ce7d92e9" address="unix:///run/containerd/s/1d6056b51e18d46ba6e52b0aa2b12e2c1fcb9595c507472f6322d21a48ca6e63" protocol=ttrpc version=3
May 14 01:27:10.524747 systemd[1]: Started cri-containerd-d3392e7743cabbcf8ead5e08b623a955d9b744faa6605ffaf4ddf4299adbd0c1.scope - libcontainer container d3392e7743cabbcf8ead5e08b623a955d9b744faa6605ffaf4ddf4299adbd0c1.
May 14 01:27:10.528411 systemd[1]: Started cri-containerd-bd03eb8f655a4e2f3a3767a319972ceccb4c267cd1d0e6f87cb6a294ce7d92e9.scope - libcontainer container bd03eb8f655a4e2f3a3767a319972ceccb4c267cd1d0e6f87cb6a294ce7d92e9.
May 14 01:27:10.532564 systemd[1]: Started cri-containerd-ae59c851bf43203f78075c2e2b649b4fd620fb856afb51eeed5bda04156314ce.scope - libcontainer container ae59c851bf43203f78075c2e2b649b4fd620fb856afb51eeed5bda04156314ce.
May 14 01:27:10.608996 containerd[1527]: time="2025-05-14T01:27:10.608938749Z" level=info msg="StartContainer for \"d3392e7743cabbcf8ead5e08b623a955d9b744faa6605ffaf4ddf4299adbd0c1\" returns successfully"
May 14 01:27:10.616007 containerd[1527]: time="2025-05-14T01:27:10.615970167Z" level=info msg="StartContainer for \"bd03eb8f655a4e2f3a3767a319972ceccb4c267cd1d0e6f87cb6a294ce7d92e9\" returns successfully"
May 14 01:27:10.621037 containerd[1527]: time="2025-05-14T01:27:10.621004785Z" level=info msg="StartContainer for \"ae59c851bf43203f78075c2e2b649b4fd620fb856afb51eeed5bda04156314ce\" returns successfully"
May 14 01:27:13.337466 kubelet[2830]: E0514 01:27:13.333208 2830 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:54734->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4284-0-0-n-c0828c9b46.183f406e45cf55a1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4284-0-0-n-c0828c9b46,UID:1527dbd9d2e0e5bd676f6e700ed79080,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-n-c0828c9b46,},FirstTimestamp:2025-05-14 01:27:02.804837793 +0000 UTC m=+356.124651476,LastTimestamp:2025-05-14 01:27:02.804837793 +0000 UTC m=+356.124651476,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-n-c0828c9b46,}"