Mar 25 02:32:47.068745 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025
Mar 25 02:32:47.068776 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 02:32:47.068787 kernel: BIOS-provided physical RAM map:
Mar 25 02:32:47.068795 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 25 02:32:47.068803 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 25 02:32:47.068813 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 25 02:32:47.068822 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Mar 25 02:32:47.068831 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Mar 25 02:32:47.068839 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 25 02:32:47.068847 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 25 02:32:47.068855 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Mar 25 02:32:47.068863 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 25 02:32:47.068871 kernel: NX (Execute Disable) protection: active
Mar 25 02:32:47.068880 kernel: APIC: Static calls initialized
Mar 25 02:32:47.068892 kernel: SMBIOS 3.0.0 present.
Mar 25 02:32:47.068901 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Mar 25 02:32:47.068909 kernel: Hypervisor detected: KVM
Mar 25 02:32:47.068918 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 25 02:32:47.068926 kernel: kvm-clock: using sched offset of 3693216301 cycles
Mar 25 02:32:47.068935 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 25 02:32:47.068946 kernel: tsc: Detected 1996.249 MHz processor
Mar 25 02:32:47.068955 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 25 02:32:47.068964 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 25 02:32:47.068973 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Mar 25 02:32:47.068982 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 25 02:32:47.068991 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 25 02:32:47.069000 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Mar 25 02:32:47.069009 kernel: ACPI: Early table checksum verification disabled
Mar 25 02:32:47.069019 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Mar 25 02:32:47.069028 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:32:47.069037 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:32:47.069418 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:32:47.069434 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Mar 25 02:32:47.069443 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:32:47.069452 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 25 02:32:47.069461 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Mar 25 02:32:47.069470 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Mar 25 02:32:47.069482 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Mar 25 02:32:47.069491 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Mar 25 02:32:47.069500 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Mar 25 02:32:47.069512 kernel: No NUMA configuration found
Mar 25 02:32:47.069521 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Mar 25 02:32:47.069531 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff]
Mar 25 02:32:47.069540 kernel: Zone ranges:
Mar 25 02:32:47.069551 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 25 02:32:47.069560 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 25 02:32:47.069570 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Mar 25 02:32:47.069579 kernel: Movable zone start for each node
Mar 25 02:32:47.069588 kernel: Early memory node ranges
Mar 25 02:32:47.069597 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 25 02:32:47.069606 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Mar 25 02:32:47.069615 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Mar 25 02:32:47.069626 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Mar 25 02:32:47.069635 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 25 02:32:47.069645 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 25 02:32:47.069654 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 25 02:32:47.069663 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 25 02:32:47.069672 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 25 02:32:47.069681 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 25 02:32:47.069690 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 25 02:32:47.069700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 25 02:32:47.069711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 25 02:32:47.069721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 25 02:32:47.069730 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 25 02:32:47.069739 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 25 02:32:47.069749 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 25 02:32:47.069759 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 25 02:32:47.069768 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Mar 25 02:32:47.069776 kernel: Booting paravirtualized kernel on KVM
Mar 25 02:32:47.069785 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 25 02:32:47.069796 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 25 02:32:47.069804 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 25 02:32:47.069813 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 25 02:32:47.069821 kernel: pcpu-alloc: [0] 0 1
Mar 25 02:32:47.069830 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 25 02:32:47.069840 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 02:32:47.069849 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 25 02:32:47.069858 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 25 02:32:47.069868 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 25 02:32:47.069877 kernel: Fallback order for Node 0: 0
Mar 25 02:32:47.069885 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 25 02:32:47.069894 kernel: Policy zone: Normal
Mar 25 02:32:47.069902 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 25 02:32:47.069911 kernel: software IO TLB: area num 2.
Mar 25 02:32:47.069920 kernel: Memory: 3962120K/4193772K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 231392K reserved, 0K cma-reserved)
Mar 25 02:32:47.069928 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 25 02:32:47.069937 kernel: ftrace: allocating 37985 entries in 149 pages
Mar 25 02:32:47.069947 kernel: ftrace: allocated 149 pages with 4 groups
Mar 25 02:32:47.069956 kernel: Dynamic Preempt: voluntary
Mar 25 02:32:47.069964 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 25 02:32:47.069974 kernel: rcu: RCU event tracing is enabled.
Mar 25 02:32:47.069983 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 25 02:32:47.069992 kernel: Trampoline variant of Tasks RCU enabled.
Mar 25 02:32:47.070001 kernel: Rude variant of Tasks RCU enabled.
Mar 25 02:32:47.070009 kernel: Tracing variant of Tasks RCU enabled.
Mar 25 02:32:47.070018 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 25 02:32:47.070028 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 25 02:32:47.070037 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 25 02:32:47.070045 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 25 02:32:47.070054 kernel: Console: colour VGA+ 80x25
Mar 25 02:32:47.070062 kernel: printk: console [tty0] enabled
Mar 25 02:32:47.070070 kernel: printk: console [ttyS0] enabled
Mar 25 02:32:47.070079 kernel: ACPI: Core revision 20230628
Mar 25 02:32:47.070088 kernel: APIC: Switch to symmetric I/O mode setup
Mar 25 02:32:47.070096 kernel: x2apic enabled
Mar 25 02:32:47.070106 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 25 02:32:47.070115 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 25 02:32:47.070123 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 25 02:32:47.070132 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Mar 25 02:32:47.070141 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 25 02:32:47.070150 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 25 02:32:47.070158 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 25 02:32:47.070167 kernel: Spectre V2 : Mitigation: Retpolines
Mar 25 02:32:47.070175 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 25 02:32:47.070186 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 25 02:32:47.070194 kernel: Speculative Store Bypass: Vulnerable
Mar 25 02:32:47.070202 kernel: x86/fpu: x87 FPU will use FXSAVE
Mar 25 02:32:47.070211 kernel: Freeing SMP alternatives memory: 32K
Mar 25 02:32:47.070226 kernel: pid_max: default: 32768 minimum: 301
Mar 25 02:32:47.070237 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 25 02:32:47.070246 kernel: landlock: Up and running.
Mar 25 02:32:47.070255 kernel: SELinux: Initializing.
Mar 25 02:32:47.070312 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 25 02:32:47.070322 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 25 02:32:47.070331 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Mar 25 02:32:47.070341 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 02:32:47.070353 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 02:32:47.070362 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 25 02:32:47.070371 kernel: Performance Events: AMD PMU driver.
Mar 25 02:32:47.070380 kernel: ... version:                0
Mar 25 02:32:47.070389 kernel: ... bit width:              48
Mar 25 02:32:47.070400 kernel: ... generic registers:      4
Mar 25 02:32:47.070409 kernel: ... value mask:             0000ffffffffffff
Mar 25 02:32:47.070418 kernel: ... max period:             00007fffffffffff
Mar 25 02:32:47.070427 kernel: ... fixed-purpose events:   0
Mar 25 02:32:47.070436 kernel: ... event mask:             000000000000000f
Mar 25 02:32:47.070445 kernel: signal: max sigframe size: 1440
Mar 25 02:32:47.070454 kernel: rcu: Hierarchical SRCU implementation.
Mar 25 02:32:47.070463 kernel: rcu: Max phase no-delay instances is 400.
Mar 25 02:32:47.070472 kernel: smp: Bringing up secondary CPUs ...
Mar 25 02:32:47.070482 kernel: smpboot: x86: Booting SMP configuration:
Mar 25 02:32:47.070491 kernel: .... node #0, CPUs: #1
Mar 25 02:32:47.070500 kernel: smp: Brought up 1 node, 2 CPUs
Mar 25 02:32:47.070509 kernel: smpboot: Max logical packages: 2
Mar 25 02:32:47.070518 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Mar 25 02:32:47.070527 kernel: devtmpfs: initialized
Mar 25 02:32:47.070536 kernel: x86/mm: Memory block size: 128MB
Mar 25 02:32:47.070545 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 25 02:32:47.070554 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 25 02:32:47.070565 kernel: pinctrl core: initialized pinctrl subsystem
Mar 25 02:32:47.070574 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 25 02:32:47.070583 kernel: audit: initializing netlink subsys (disabled)
Mar 25 02:32:47.070592 kernel: audit: type=2000 audit(1742869965.873:1): state=initialized audit_enabled=0 res=1
Mar 25 02:32:47.070600 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 25 02:32:47.070609 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 25 02:32:47.070618 kernel: cpuidle: using governor menu
Mar 25 02:32:47.070627 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 25 02:32:47.070636 kernel: dca service started, version 1.12.1
Mar 25 02:32:47.070647 kernel: PCI: Using configuration type 1 for base access
Mar 25 02:32:47.070656 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 25 02:32:47.070665 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 25 02:32:47.070674 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 25 02:32:47.070683 kernel: ACPI: Added _OSI(Module Device)
Mar 25 02:32:47.070692 kernel: ACPI: Added _OSI(Processor Device)
Mar 25 02:32:47.070701 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 25 02:32:47.070710 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 25 02:32:47.070718 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 25 02:32:47.070730 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 25 02:32:47.070738 kernel: ACPI: Interpreter enabled
Mar 25 02:32:47.070747 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 25 02:32:47.070756 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 25 02:32:47.070765 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 25 02:32:47.070774 kernel: PCI: Using E820 reservations for host bridge windows
Mar 25 02:32:47.070783 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 25 02:32:47.070792 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 25 02:32:47.070939 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 25 02:32:47.071046 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 25 02:32:47.071142 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 25 02:32:47.071157 kernel: acpiphp: Slot [3] registered
Mar 25 02:32:47.071167 kernel: acpiphp: Slot [4] registered
Mar 25 02:32:47.071177 kernel: acpiphp: Slot [5] registered
Mar 25 02:32:47.071186 kernel: acpiphp: Slot [6] registered
Mar 25 02:32:47.071196 kernel: acpiphp: Slot [7] registered
Mar 25 02:32:47.071205 kernel: acpiphp: Slot [8] registered
Mar 25 02:32:47.071218 kernel: acpiphp: Slot [9] registered
Mar 25 02:32:47.071228 kernel: acpiphp: Slot [10] registered
Mar 25 02:32:47.071237 kernel: acpiphp: Slot [11] registered
Mar 25 02:32:47.071246 kernel: acpiphp: Slot [12] registered
Mar 25 02:32:47.071256 kernel: acpiphp: Slot [13] registered
Mar 25 02:32:47.071297 kernel: acpiphp: Slot [14] registered
Mar 25 02:32:47.071307 kernel: acpiphp: Slot [15] registered
Mar 25 02:32:47.071316 kernel: acpiphp: Slot [16] registered
Mar 25 02:32:47.071326 kernel: acpiphp: Slot [17] registered
Mar 25 02:32:47.071339 kernel: acpiphp: Slot [18] registered
Mar 25 02:32:47.071348 kernel: acpiphp: Slot [19] registered
Mar 25 02:32:47.071358 kernel: acpiphp: Slot [20] registered
Mar 25 02:32:47.071367 kernel: acpiphp: Slot [21] registered
Mar 25 02:32:47.071376 kernel: acpiphp: Slot [22] registered
Mar 25 02:32:47.071386 kernel: acpiphp: Slot [23] registered
Mar 25 02:32:47.071395 kernel: acpiphp: Slot [24] registered
Mar 25 02:32:47.071421 kernel: acpiphp: Slot [25] registered
Mar 25 02:32:47.071433 kernel: acpiphp: Slot [26] registered
Mar 25 02:32:47.071447 kernel: acpiphp: Slot [27] registered
Mar 25 02:32:47.071457 kernel: acpiphp: Slot [28] registered
Mar 25 02:32:47.071468 kernel: acpiphp: Slot [29] registered
Mar 25 02:32:47.071479 kernel: acpiphp: Slot [30] registered
Mar 25 02:32:47.071489 kernel: acpiphp: Slot [31] registered
Mar 25 02:32:47.071500 kernel: PCI host bridge to bus 0000:00
Mar 25 02:32:47.072335 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 25 02:32:47.072445 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 25 02:32:47.072544 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 25 02:32:47.072648 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 25 02:32:47.072744 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Mar 25 02:32:47.072940 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 25 02:32:47.073095 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 25 02:32:47.073211 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 25 02:32:47.074381 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 25 02:32:47.074496 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Mar 25 02:32:47.074598 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 25 02:32:47.074696 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 25 02:32:47.074795 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 25 02:32:47.074889 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 25 02:32:47.074992 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 25 02:32:47.075088 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 25 02:32:47.075187 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 25 02:32:47.075309 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 25 02:32:47.075419 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 25 02:32:47.075517 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 25 02:32:47.075612 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Mar 25 02:32:47.075809 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Mar 25 02:32:47.077253 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 25 02:32:47.077418 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 25 02:32:47.077515 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Mar 25 02:32:47.077611 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Mar 25 02:32:47.077707 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Mar 25 02:32:47.077848 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Mar 25 02:32:47.077985 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 25 02:32:47.078087 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 25 02:32:47.078183 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Mar 25 02:32:47.080319 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Mar 25 02:32:47.080446 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Mar 25 02:32:47.080549 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Mar 25 02:32:47.080649 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Mar 25 02:32:47.080758 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Mar 25 02:32:47.080862 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Mar 25 02:32:47.080955 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Mar 25 02:32:47.081048 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Mar 25 02:32:47.081062 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 25 02:32:47.081072 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 25 02:32:47.081081 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 25 02:32:47.081091 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 25 02:32:47.081100 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 25 02:32:47.081113 kernel: iommu: Default domain type: Translated
Mar 25 02:32:47.081122 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 25 02:32:47.081132 kernel: PCI: Using ACPI for IRQ routing
Mar 25 02:32:47.081141 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 25 02:32:47.081150 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 25 02:32:47.081160 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Mar 25 02:32:47.081254 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 25 02:32:47.081370 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 25 02:32:47.081464 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 25 02:32:47.081482 kernel: vgaarb: loaded
Mar 25 02:32:47.081491 kernel: clocksource: Switched to clocksource kvm-clock
Mar 25 02:32:47.081501 kernel: VFS: Disk quotas dquot_6.6.0
Mar 25 02:32:47.081510 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 25 02:32:47.081519 kernel: pnp: PnP ACPI init
Mar 25 02:32:47.081619 kernel: pnp 00:03: [dma 2]
Mar 25 02:32:47.081634 kernel: pnp: PnP ACPI: found 5 devices
Mar 25 02:32:47.081644 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 25 02:32:47.081656 kernel: NET: Registered PF_INET protocol family
Mar 25 02:32:47.081666 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 25 02:32:47.081675 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 25 02:32:47.081685 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 25 02:32:47.081694 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 25 02:32:47.081704 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 25 02:32:47.081713 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 25 02:32:47.081722 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 02:32:47.081732 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 25 02:32:47.081743 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 25 02:32:47.081752 kernel: NET: Registered PF_XDP protocol family
Mar 25 02:32:47.081852 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 25 02:32:47.081937 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 25 02:32:47.082074 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 25 02:32:47.082168 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Mar 25 02:32:47.082252 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Mar 25 02:32:47.085488 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 25 02:32:47.085596 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 25 02:32:47.085612 kernel: PCI: CLS 0 bytes, default 64
Mar 25 02:32:47.085622 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 25 02:32:47.085632 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Mar 25 02:32:47.085641 kernel: Initialise system trusted keyrings
Mar 25 02:32:47.085651 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 25 02:32:47.085660 kernel: Key type asymmetric registered
Mar 25 02:32:47.085669 kernel: Asymmetric key parser 'x509' registered
Mar 25 02:32:47.085679 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 25 02:32:47.085691 kernel: io scheduler mq-deadline registered
Mar 25 02:32:47.085701 kernel: io scheduler kyber registered
Mar 25 02:32:47.085710 kernel: io scheduler bfq registered
Mar 25 02:32:47.085719 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 25 02:32:47.085729 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 25 02:32:47.085739 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 25 02:32:47.085748 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 25 02:32:47.085758 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 25 02:32:47.085767 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 25 02:32:47.085778 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 25 02:32:47.085788 kernel: random: crng init done
Mar 25 02:32:47.085797 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 25 02:32:47.085807 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 25 02:32:47.085816 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 25 02:32:47.085909 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 25 02:32:47.085926 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 25 02:32:47.086006 kernel: rtc_cmos 00:04: registered as rtc0
Mar 25 02:32:47.086095 kernel: rtc_cmos 00:04: setting system clock to 2025-03-25T02:32:46 UTC (1742869966)
Mar 25 02:32:47.086178 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 25 02:32:47.086191 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 25 02:32:47.086201 kernel: NET: Registered PF_INET6 protocol family
Mar 25 02:32:47.086210 kernel: Segment Routing with IPv6
Mar 25 02:32:47.086219 kernel: In-situ OAM (IOAM) with IPv6
Mar 25 02:32:47.086229 kernel: NET: Registered PF_PACKET protocol family
Mar 25 02:32:47.086238 kernel: Key type dns_resolver registered
Mar 25 02:32:47.086247 kernel: IPI shorthand broadcast: enabled
Mar 25 02:32:47.086274 kernel: sched_clock: Marking stable (1015012069, 176461056)->(1235815552, -44342427)
Mar 25 02:32:47.086310 kernel: registered taskstats version 1
Mar 25 02:32:47.086319 kernel: Loading compiled-in X.509 certificates
Mar 25 02:32:47.086329 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386'
Mar 25 02:32:47.086338 kernel: Key type .fscrypt registered
Mar 25 02:32:47.086347 kernel: Key type fscrypt-provisioning registered
Mar 25 02:32:47.086356 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 25 02:32:47.086365 kernel: ima: Allocated hash algorithm: sha1
Mar 25 02:32:47.086378 kernel: ima: No architecture policies found
Mar 25 02:32:47.086387 kernel: clk: Disabling unused clocks
Mar 25 02:32:47.086396 kernel: Freeing unused kernel image (initmem) memory: 43592K
Mar 25 02:32:47.086405 kernel: Write protecting the kernel read-only data: 40960k
Mar 25 02:32:47.086414 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K
Mar 25 02:32:47.086423 kernel: Run /init as init process
Mar 25 02:32:47.086432 kernel: with arguments:
Mar 25 02:32:47.086441 kernel: /init
Mar 25 02:32:47.086450 kernel: with environment:
Mar 25 02:32:47.086459 kernel: HOME=/
Mar 25 02:32:47.086471 kernel: TERM=linux
Mar 25 02:32:47.086480 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 25 02:32:47.086490 systemd[1]: Successfully made /usr/ read-only.
Mar 25 02:32:47.086505 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 02:32:47.086516 systemd[1]: Detected virtualization kvm.
Mar 25 02:32:47.086525 systemd[1]: Detected architecture x86-64.
Mar 25 02:32:47.086535 systemd[1]: Running in initrd.
Mar 25 02:32:47.086547 systemd[1]: No hostname configured, using default hostname.
Mar 25 02:32:47.086558 systemd[1]: Hostname set to .
Mar 25 02:32:47.086568 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 02:32:47.086577 systemd[1]: Queued start job for default target initrd.target.
Mar 25 02:32:47.086587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 02:32:47.086598 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 02:32:47.086618 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 25 02:32:47.086632 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 02:32:47.086642 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 25 02:32:47.086654 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 25 02:32:47.086665 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 25 02:32:47.086676 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 25 02:32:47.086688 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 02:32:47.086698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 02:32:47.086709 systemd[1]: Reached target paths.target - Path Units.
Mar 25 02:32:47.086719 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 02:32:47.086730 systemd[1]: Reached target swap.target - Swaps.
Mar 25 02:32:47.086740 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 02:32:47.086750 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 02:32:47.086761 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 02:32:47.086771 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 25 02:32:47.086783 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 25 02:32:47.086794 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 02:32:47.086804 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 02:32:47.086814 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 02:32:47.086824 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 02:32:47.086835 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 25 02:32:47.086845 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 02:32:47.086855 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 25 02:32:47.086867 systemd[1]: Starting systemd-fsck-usr.service...
Mar 25 02:32:47.086878 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 02:32:47.086888 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 02:32:47.086898 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 02:32:47.086909 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 02:32:47.086942 systemd-journald[185]: Collecting audit messages is disabled.
Mar 25 02:32:47.086971 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 02:32:47.086985 systemd[1]: Finished systemd-fsck-usr.service.
Mar 25 02:32:47.086996 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 02:32:47.087007 systemd-journald[185]: Journal started
Mar 25 02:32:47.087031 systemd-journald[185]: Runtime Journal (/run/log/journal/dbdcee6bbb6a4c34b21f1773af0927c4) is 8M, max 78.2M, 70.2M free.
Mar 25 02:32:47.077903 systemd-modules-load[186]: Inserted module 'overlay'
Mar 25 02:32:47.090385 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 02:32:47.113308 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 25 02:32:47.115757 kernel: Bridge firewalling registered
Mar 25 02:32:47.115227 systemd-modules-load[186]: Inserted module 'br_netfilter'
Mar 25 02:32:47.151220 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 02:32:47.152560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 02:32:47.154148 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 02:32:47.160497 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 02:32:47.163493 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 02:32:47.168132 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 02:32:47.173938 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 02:32:47.187513 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 02:32:47.193749 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 02:32:47.203659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 02:32:47.205401 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 25 02:32:47.209422 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 02:32:47.224409 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 02:32:47.234016 dracut-cmdline[218]: dracut-dracut-053
Mar 25 02:32:47.236026 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 02:32:47.280149 systemd-resolved[222]: Positive Trust Anchors:
Mar 25 02:32:47.280166 systemd-resolved[222]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 02:32:47.280208 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 02:32:47.284153 systemd-resolved[222]: Defaulting to hostname 'linux'.
Mar 25 02:32:47.286680 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 02:32:47.288326 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 02:32:47.339472 kernel: SCSI subsystem initialized
Mar 25 02:32:47.351338 kernel: Loading iSCSI transport class v2.0-870.
Mar 25 02:32:47.364319 kernel: iscsi: registered transport (tcp)
Mar 25 02:32:47.387524 kernel: iscsi: registered transport (qla4xxx)
Mar 25 02:32:47.387615 kernel: QLogic iSCSI HBA Driver
Mar 25 02:32:47.428168 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 25 02:32:47.430336 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 25 02:32:47.487051 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 25 02:32:47.487167 kernel: device-mapper: uevent: version 1.0.3
Mar 25 02:32:47.490283 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 25 02:32:47.545504 kernel: raid6: sse2x4 gen() 5519 MB/s
Mar 25 02:32:47.564411 kernel: raid6: sse2x2 gen() 5972 MB/s
Mar 25 02:32:47.582732 kernel: raid6: sse2x1 gen() 8493 MB/s
Mar 25 02:32:47.582787 kernel: raid6: using algorithm sse2x1 gen() 8493 MB/s
Mar 25 02:32:47.601805 kernel: raid6: .... xor() 7348 MB/s, rmw enabled
Mar 25 02:32:47.601872 kernel: raid6: using ssse3x2 recovery algorithm
Mar 25 02:32:47.625867 kernel: xor: measuring software checksum speed
Mar 25 02:32:47.625949 kernel: prefetch64-sse : 17293 MB/sec
Mar 25 02:32:47.626396 kernel: generic_sse : 15726 MB/sec
Mar 25 02:32:47.627579 kernel: xor: using function: prefetch64-sse (17293 MB/sec)
Mar 25 02:32:47.801321 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 25 02:32:47.813201 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 02:32:47.818024 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 02:32:47.845922 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Mar 25 02:32:47.851620 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 02:32:47.858175 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 25 02:32:47.885316 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Mar 25 02:32:47.931929 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 02:32:47.935182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 02:32:48.030500 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 02:32:48.036531 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 25 02:32:48.073243 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 25 02:32:48.077372 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 02:32:48.081655 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 02:32:48.083927 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 02:32:48.089513 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 25 02:32:48.122111 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 02:32:48.132289 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Mar 25 02:32:48.163178 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Mar 25 02:32:48.163341 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 25 02:32:48.163357 kernel: GPT:17805311 != 20971519
Mar 25 02:32:48.163370 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 25 02:32:48.163383 kernel: GPT:17805311 != 20971519
Mar 25 02:32:48.163422 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 25 02:32:48.163435 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 02:32:48.178603 kernel: libata version 3.00 loaded.
Mar 25 02:32:48.179496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 02:32:48.191085 kernel: ata_piix 0000:00:01.1: version 2.13
Mar 25 02:32:48.191300 kernel: scsi host0: ata_piix
Mar 25 02:32:48.191467 kernel: scsi host1: ata_piix
Mar 25 02:32:48.191598 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Mar 25 02:32:48.191614 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Mar 25 02:32:48.181988 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 02:32:48.194050 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 02:32:48.195881 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 02:32:48.196097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 02:32:48.198150 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 02:32:48.202503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 02:32:48.218166 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (452)
Mar 25 02:32:48.218197 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (470)
Mar 25 02:32:48.217868 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 02:32:48.261093 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 25 02:32:48.280133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 02:32:48.290732 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 25 02:32:48.291383 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 25 02:32:48.303339 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 25 02:32:48.314304 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 25 02:32:48.316404 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 25 02:32:48.321387 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 02:32:48.336532 disk-uuid[506]: Primary Header is updated.
Mar 25 02:32:48.336532 disk-uuid[506]: Secondary Entries is updated.
Mar 25 02:32:48.336532 disk-uuid[506]: Secondary Header is updated.
Mar 25 02:32:48.341499 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 02:32:48.348292 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 02:32:49.365849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 25 02:32:49.366540 disk-uuid[511]: The operation has completed successfully.
Mar 25 02:32:49.451108 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 25 02:32:49.451332 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 25 02:32:49.499932 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 25 02:32:49.514124 sh[526]: Success
Mar 25 02:32:49.527296 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Mar 25 02:32:49.572937 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 25 02:32:49.582435 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 25 02:32:49.585299 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 25 02:32:49.621001 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2
Mar 25 02:32:49.621089 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 25 02:32:49.621103 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 25 02:32:49.621115 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 25 02:32:49.621143 kernel: BTRFS info (device dm-0): using free space tree
Mar 25 02:32:49.636631 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 25 02:32:49.637864 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 25 02:32:49.639636 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 25 02:32:49.643425 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 25 02:32:49.672802 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 02:32:49.672866 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 02:32:49.672879 kernel: BTRFS info (device vda6): using free space tree
Mar 25 02:32:49.680315 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 25 02:32:49.687307 kernel: BTRFS info (device vda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 02:32:49.693879 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 25 02:32:49.696429 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 25 02:32:49.785572 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 02:32:49.791017 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 02:32:49.845859 systemd-networkd[705]: lo: Link UP
Mar 25 02:32:49.846825 systemd-networkd[705]: lo: Gained carrier
Mar 25 02:32:49.850531 systemd-networkd[705]: Enumeration completed
Mar 25 02:32:49.851020 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 02:32:49.851027 systemd-networkd[705]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 02:32:49.852369 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 02:32:49.853414 systemd[1]: Reached target network.target - Network.
Mar 25 02:32:49.854810 systemd-networkd[705]: eth0: Link UP
Mar 25 02:32:49.854815 systemd-networkd[705]: eth0: Gained carrier
Mar 25 02:32:49.856125 systemd-networkd[705]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 02:32:49.875413 systemd-networkd[705]: eth0: DHCPv4 address 172.24.4.54/24, gateway 172.24.4.1 acquired from 172.24.4.1
Mar 25 02:32:49.885471 ignition[646]: Ignition 2.20.0
Mar 25 02:32:49.885485 ignition[646]: Stage: fetch-offline
Mar 25 02:32:49.887813 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 02:32:49.885522 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Mar 25 02:32:49.885533 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 25 02:32:49.885642 ignition[646]: parsed url from cmdline: ""
Mar 25 02:32:49.890391 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 25 02:32:49.885649 ignition[646]: no config URL provided
Mar 25 02:32:49.885655 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 02:32:49.885664 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Mar 25 02:32:49.885669 ignition[646]: failed to fetch config: resource requires networking
Mar 25 02:32:49.885874 ignition[646]: Ignition finished successfully
Mar 25 02:32:49.919425 ignition[716]: Ignition 2.20.0
Mar 25 02:32:49.919440 ignition[716]: Stage: fetch
Mar 25 02:32:49.919627 ignition[716]: no configs at "/usr/lib/ignition/base.d"
Mar 25 02:32:49.919639 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 25 02:32:49.919748 ignition[716]: parsed url from cmdline: ""
Mar 25 02:32:49.919752 ignition[716]: no config URL provided
Mar 25 02:32:49.919758 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 02:32:49.919768 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Mar 25 02:32:49.919870 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 25 02:32:49.919897 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 25 02:32:49.919903 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 25 02:32:50.168163 ignition[716]: GET result: OK
Mar 25 02:32:50.168402 ignition[716]: parsing config with SHA512: 845d27b783f63dff35e7083fc4ec6da7b20d8b8ee581fcfa9cdcf7892dba7ae9fa1904b45f7f7af1fce265569706167f31e4eabd3e3504bb3f15bb7f0a2de268
Mar 25 02:32:50.183351 unknown[716]: fetched base config from "system"
Mar 25 02:32:50.183377 unknown[716]: fetched base config from "system"
Mar 25 02:32:50.184582 ignition[716]: fetch: fetch complete
Mar 25 02:32:50.183417 unknown[716]: fetched user config from "openstack"
Mar 25 02:32:50.184596 ignition[716]: fetch: fetch passed
Mar 25 02:32:50.183566 systemd-resolved[222]: Detected conflict on linux IN A 172.24.4.54
Mar 25 02:32:50.184689 ignition[716]: Ignition finished successfully
Mar 25 02:32:50.183585 systemd-resolved[222]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Mar 25 02:32:50.189457 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 25 02:32:50.195574 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 25 02:32:50.250421 ignition[722]: Ignition 2.20.0
Mar 25 02:32:50.250448 ignition[722]: Stage: kargs
Mar 25 02:32:50.250856 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Mar 25 02:32:50.250884 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 25 02:32:50.255522 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 25 02:32:50.253208 ignition[722]: kargs: kargs passed
Mar 25 02:32:50.253353 ignition[722]: Ignition finished successfully
Mar 25 02:32:50.262548 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 25 02:32:50.304160 ignition[728]: Ignition 2.20.0
Mar 25 02:32:50.305836 ignition[728]: Stage: disks
Mar 25 02:32:50.306236 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Mar 25 02:32:50.306312 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 25 02:32:50.310736 ignition[728]: disks: disks passed
Mar 25 02:32:50.313679 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 25 02:32:50.310834 ignition[728]: Ignition finished successfully
Mar 25 02:32:50.315993 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 25 02:32:50.317840 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 25 02:32:50.321773 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 02:32:50.323065 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 02:32:50.325967 systemd[1]: Reached target basic.target - Basic System.
Mar 25 02:32:50.332494 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 25 02:32:50.382407 systemd-fsck[737]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 25 02:32:50.392439 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 25 02:32:50.397214 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 25 02:32:50.558610 kernel: EXT4-fs (vda9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none.
Mar 25 02:32:50.558945 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 25 02:32:50.559976 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 25 02:32:50.563349 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 02:32:50.567495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 25 02:32:50.570632 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 25 02:32:50.576410 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 25 02:32:50.578156 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 25 02:32:50.578191 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 02:32:50.587318 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 25 02:32:50.591852 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 25 02:32:50.626446 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (745)
Mar 25 02:32:50.626475 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 02:32:50.626488 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 02:32:50.626500 kernel: BTRFS info (device vda6): using free space tree
Mar 25 02:32:50.626513 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 25 02:32:50.635605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 02:32:50.718903 initrd-setup-root[773]: cut: /sysroot/etc/passwd: No such file or directory
Mar 25 02:32:50.725972 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory
Mar 25 02:32:50.732106 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory
Mar 25 02:32:50.736834 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 25 02:32:50.835035 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 25 02:32:50.836868 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 25 02:32:50.840489 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 25 02:32:50.854547 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 25 02:32:50.859537 kernel: BTRFS info (device vda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 02:32:50.886650 ignition[862]: INFO : Ignition 2.20.0
Mar 25 02:32:50.888177 ignition[862]: INFO : Stage: mount
Mar 25 02:32:50.888177 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 02:32:50.888177 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 25 02:32:50.890962 ignition[862]: INFO : mount: mount passed
Mar 25 02:32:50.890962 ignition[862]: INFO : Ignition finished successfully
Mar 25 02:32:50.892034 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 25 02:32:50.892792 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 25 02:32:51.173689 systemd-networkd[705]: eth0: Gained IPv6LL
Mar 25 02:32:57.772704 coreos-metadata[747]: Mar 25 02:32:57.772 WARN failed to locate config-drive, using the metadata service API instead
Mar 25 02:32:57.813964 coreos-metadata[747]: Mar 25 02:32:57.813 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 25 02:32:57.829803 coreos-metadata[747]: Mar 25 02:32:57.829 INFO Fetch successful
Mar 25 02:32:57.831225 coreos-metadata[747]: Mar 25 02:32:57.830 INFO wrote hostname ci-4284-0-0-3-6c96446f48.novalocal to /sysroot/etc/hostname
Mar 25 02:32:57.833853 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 25 02:32:57.834098 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 25 02:32:57.842688 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 25 02:32:57.873088 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 02:32:57.906394 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (879)
Mar 25 02:32:57.915410 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 02:32:57.915507 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 02:32:57.917669 kernel: BTRFS info (device vda6): using free space tree
Mar 25 02:32:57.929357 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 25 02:32:57.934509 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 02:32:57.982791 ignition[897]: INFO : Ignition 2.20.0
Mar 25 02:32:57.982791 ignition[897]: INFO : Stage: files
Mar 25 02:32:57.986041 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 02:32:57.986041 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 25 02:32:57.986041 ignition[897]: DEBUG : files: compiled without relabeling support, skipping
Mar 25 02:32:57.991939 ignition[897]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 25 02:32:57.991939 ignition[897]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 25 02:32:57.997436 ignition[897]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 25 02:32:57.997436 ignition[897]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 25 02:32:57.997436 ignition[897]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 25 02:32:57.997102 unknown[897]: wrote ssh authorized keys file for user: core
Mar 25 02:32:58.006502 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 25 02:32:58.006502 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 25 02:32:58.075483 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 25 02:32:58.365863 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 25 02:32:58.365863 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 25 02:32:58.370645 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 25 02:32:59.124321 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 25 02:33:01.371105 ignition[897]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 25 02:33:01.371105 ignition[897]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 25 02:33:01.374948 ignition[897]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 02:33:01.374948 ignition[897]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 02:33:01.374948 ignition[897]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 25 02:33:01.377980 ignition[897]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 25 02:33:01.377980 ignition[897]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 25 02:33:01.377980 ignition[897]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 02:33:01.377980 ignition[897]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 02:33:01.377980 ignition[897]: INFO : files: files passed
Mar 25 02:33:01.377980 ignition[897]: INFO : Ignition finished successfully
Mar 25 02:33:01.381572 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 25 02:33:01.388823 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 25 02:33:01.392567 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 25 02:33:01.408345 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 25 02:33:01.408553 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 25 02:33:01.419327 initrd-setup-root-after-ignition[927]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 02:33:01.419327 initrd-setup-root-after-ignition[927]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 02:33:01.422490 initrd-setup-root-after-ignition[931]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 02:33:01.425130 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 02:33:01.427768 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 25 02:33:01.431200 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 25 02:33:01.503411 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 25 02:33:01.503524 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 25 02:33:01.505465 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 25 02:33:01.507400 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 25 02:33:01.509365 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 25 02:33:01.511396 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 25 02:33:01.535197 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 02:33:01.540683 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 25 02:33:01.564415 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 25 02:33:01.565860 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 02:33:01.567888 systemd[1]: Stopped target timers.target - Timer Units.
Mar 25 02:33:01.569765 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 25 02:33:01.570112 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 02:33:01.572646 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 25 02:33:01.574600 systemd[1]: Stopped target basic.target - Basic System.
Mar 25 02:33:01.576494 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 25 02:33:01.578627 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 02:33:01.580770 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 25 02:33:01.582863 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 25 02:33:01.584915 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 02:33:01.587160 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 25 02:33:01.588971 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 25 02:33:01.590021 systemd[1]: Stopped target swap.target - Swaps.
Mar 25 02:33:01.591425 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 25 02:33:01.591635 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 02:33:01.592877 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 25 02:33:01.593686 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 02:33:01.594832 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 25 02:33:01.596440 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 02:33:01.597396 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 25 02:33:01.597551 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 25 02:33:01.598852 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 25 02:33:01.599018 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 02:33:01.601704 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 25 02:33:01.601884 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 25 02:33:01.603768 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 25 02:33:01.605433 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 25 02:33:01.606481 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 25 02:33:01.606616 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 02:33:01.608762 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 25 02:33:01.608888 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 02:33:01.617474 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 25 02:33:01.617564 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 25 02:33:01.630780 ignition[951]: INFO : Ignition 2.20.0
Mar 25 02:33:01.631652 ignition[951]: INFO : Stage: umount
Mar 25 02:33:01.631652 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 02:33:01.631652 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 25 02:33:01.634330 ignition[951]: INFO : umount: umount passed
Mar 25 02:33:01.634330 ignition[951]: INFO : Ignition finished successfully
Mar 25 02:33:01.635561 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 25 02:33:01.635671 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 25 02:33:01.637023 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 25 02:33:01.637096 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 25 02:33:01.638987 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 25 02:33:01.639041 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 25 02:33:01.639952 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 25 02:33:01.639996 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 25 02:33:01.640634 systemd[1]: Stopped target network.target - Network.
Mar 25 02:33:01.641127 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 25 02:33:01.641180 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 02:33:01.643464 systemd[1]: Stopped target paths.target - Path Units.
Mar 25 02:33:01.644034 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 25 02:33:01.644109 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 02:33:01.645340 systemd[1]: Stopped target slices.target - Slice Units.
Mar 25 02:33:01.645879 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 25 02:33:01.646957 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 25 02:33:01.646997 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 02:33:01.647994 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 25 02:33:01.648025 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 02:33:01.649146 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 25 02:33:01.649194 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 25 02:33:01.650330 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 25 02:33:01.650371 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 25 02:33:01.651466 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 25 02:33:01.652601 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 25 02:33:01.655012 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 25 02:33:01.655788 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 25 02:33:01.655875 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 25 02:33:01.657174 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 25 02:33:01.657244 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 25 02:33:01.661668 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 25 02:33:01.661759 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 25 02:33:01.663897 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 25 02:33:01.664081 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 25 02:33:01.664176 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 25 02:33:01.667013 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 25 02:33:01.667660 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 25 02:33:01.667932 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 02:33:01.671337 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 25 02:33:01.674858 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 25 02:33:01.674917 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 02:33:01.676642 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 02:33:01.676685 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 02:33:01.678156 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 25 02:33:01.678197 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 25 02:33:01.678863 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 25 02:33:01.678902 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 02:33:01.681374 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 02:33:01.683508 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 02:33:01.683569 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 25 02:33:01.693731 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 25 02:33:01.693859 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 02:33:01.694856 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 25 02:33:01.694943 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 25 02:33:01.696203 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 25 02:33:01.696283 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 25 02:33:01.697373 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 25 02:33:01.697402 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 02:33:01.698252 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 25 02:33:01.698357 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 02:33:01.699841 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 25 02:33:01.699882 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 25 02:33:01.700925 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 02:33:01.700966 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 02:33:01.704380 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 25 02:33:01.705046 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 25 02:33:01.705102 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 02:33:01.706216 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 25 02:33:01.707311 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 02:33:01.708021 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 25 02:33:01.708062 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 02:33:01.709165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 02:33:01.709207 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 02:33:01.712382 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 25 02:33:01.712438 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 02:33:01.719321 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 25 02:33:01.719432 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 25 02:33:01.720835 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 25 02:33:01.723413 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 25 02:33:01.739710 systemd[1]: Switching root.
Mar 25 02:33:01.771840 systemd-journald[185]: Journal stopped
Mar 25 02:33:03.580604 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Mar 25 02:33:03.580671 kernel: SELinux: policy capability network_peer_controls=1
Mar 25 02:33:03.580691 kernel: SELinux: policy capability open_perms=1
Mar 25 02:33:03.580707 kernel: SELinux: policy capability extended_socket_class=1
Mar 25 02:33:03.580720 kernel: SELinux: policy capability always_check_network=0
Mar 25 02:33:03.580732 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 25 02:33:03.580744 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 25 02:33:03.580760 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 25 02:33:03.580772 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 25 02:33:03.580784 kernel: audit: type=1403 audit(1742869982.437:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 25 02:33:03.580797 systemd[1]: Successfully loaded SELinux policy in 44.255ms.
Mar 25 02:33:03.580819 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.595ms.
Mar 25 02:33:03.580836 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 02:33:03.580849 systemd[1]: Detected virtualization kvm.
Mar 25 02:33:03.580863 systemd[1]: Detected architecture x86-64.
Mar 25 02:33:03.580876 systemd[1]: Detected first boot.
Mar 25 02:33:03.580889 systemd[1]: Hostname set to .
Mar 25 02:33:03.580903 systemd[1]: Initializing machine ID from VM UUID.
Mar 25 02:33:03.580916 zram_generator::config[997]: No configuration found.
Mar 25 02:33:03.580933 kernel: Guest personality initialized and is inactive
Mar 25 02:33:03.580948 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 25 02:33:03.580962 kernel: Initialized host personality
Mar 25 02:33:03.580974 kernel: NET: Registered PF_VSOCK protocol family
Mar 25 02:33:03.580987 systemd[1]: Populated /etc with preset unit settings.
Mar 25 02:33:03.581001 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 25 02:33:03.581018 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 25 02:33:03.581033 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 25 02:33:03.581048 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 25 02:33:03.581061 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 25 02:33:03.581077 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 25 02:33:03.581090 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 25 02:33:03.581103 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 25 02:33:03.581117 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 25 02:33:03.581130 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 25 02:33:03.581144 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 25 02:33:03.581157 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 25 02:33:03.581170 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 02:33:03.581187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 02:33:03.581200 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 25 02:33:03.581213 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 25 02:33:03.581228 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 25 02:33:03.581242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 02:33:03.581256 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 25 02:33:03.581472 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 02:33:03.581490 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 25 02:33:03.581504 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 25 02:33:03.581517 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 25 02:33:03.581530 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 25 02:33:03.581543 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 02:33:03.581556 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 02:33:03.581570 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 02:33:03.581583 systemd[1]: Reached target swap.target - Swaps.
Mar 25 02:33:03.581595 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 25 02:33:03.581612 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 25 02:33:03.581625 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 25 02:33:03.581638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 02:33:03.581651 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 02:33:03.581664 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 02:33:03.581678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 25 02:33:03.581691 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 25 02:33:03.581704 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 25 02:33:03.581717 systemd[1]: Mounting media.mount - External Media Directory...
Mar 25 02:33:03.581732 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 02:33:03.581747 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 25 02:33:03.581760 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 25 02:33:03.581773 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 25 02:33:03.581787 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 25 02:33:03.581931 systemd[1]: Reached target machines.target - Containers.
Mar 25 02:33:03.581948 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 25 02:33:03.581961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 02:33:03.581977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 02:33:03.581990 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 25 02:33:03.582002 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 02:33:03.582015 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 02:33:03.582026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 02:33:03.582039 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 25 02:33:03.582055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 02:33:03.582067 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 25 02:33:03.582081 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 25 02:33:03.582093 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 25 02:33:03.582105 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 25 02:33:03.582119 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 25 02:33:03.582132 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 02:33:03.582145 kernel: fuse: init (API version 7.39)
Mar 25 02:33:03.582157 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 02:33:03.582169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 02:33:03.582181 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 25 02:33:03.582195 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 25 02:33:03.582207 kernel: loop: module loaded
Mar 25 02:33:03.582220 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 25 02:33:03.582232 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 02:33:03.582244 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 25 02:33:03.582256 systemd[1]: Stopped verity-setup.service.
Mar 25 02:33:03.582331 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 02:33:03.582357 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 25 02:33:03.582401 systemd-journald[1087]: Collecting audit messages is disabled.
Mar 25 02:33:03.582444 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 25 02:33:03.582462 systemd[1]: Mounted media.mount - External Media Directory.
Mar 25 02:33:03.582475 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 25 02:33:03.582487 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 25 02:33:03.582501 systemd-journald[1087]: Journal started
Mar 25 02:33:03.582527 systemd-journald[1087]: Runtime Journal (/run/log/journal/dbdcee6bbb6a4c34b21f1773af0927c4) is 8M, max 78.2M, 70.2M free.
Mar 25 02:33:03.257186 systemd[1]: Queued start job for default target multi-user.target.
Mar 25 02:33:03.265493 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 25 02:33:03.265914 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 25 02:33:03.585335 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 02:33:03.593199 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 25 02:33:03.593928 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 02:33:03.594663 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 25 02:33:03.594815 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 25 02:33:03.595572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 02:33:03.595731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 02:33:03.596443 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 02:33:03.596585 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 02:33:03.597340 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 25 02:33:03.597482 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 25 02:33:03.598159 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 02:33:03.598325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 02:33:03.599054 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 02:33:03.601920 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 25 02:33:03.602724 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 25 02:33:03.619416 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 25 02:33:03.626388 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 25 02:33:03.627982 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 25 02:33:03.632460 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 25 02:33:03.637299 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 25 02:33:03.637343 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 02:33:03.653797 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 25 02:33:03.660451 kernel: ACPI: bus type drm_connector registered
Mar 25 02:33:03.655671 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 25 02:33:03.659451 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 25 02:33:03.660315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 02:33:03.661907 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 25 02:33:03.664368 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 25 02:33:03.664977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 02:33:03.667393 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 25 02:33:03.668017 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 02:33:03.669361 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 02:33:03.671433 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 25 02:33:03.678562 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 02:33:03.686129 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 25 02:33:03.687025 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 02:33:03.687168 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 02:33:03.688107 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 02:33:03.688931 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 25 02:33:03.690001 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 25 02:33:03.691892 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 25 02:33:03.711363 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 25 02:33:03.723307 kernel: loop0: detected capacity change from 0 to 205544
Mar 25 02:33:03.723486 systemd-journald[1087]: Time spent on flushing to /var/log/journal/dbdcee6bbb6a4c34b21f1773af0927c4 is 32.138ms for 965 entries.
Mar 25 02:33:03.723486 systemd-journald[1087]: System Journal (/var/log/journal/dbdcee6bbb6a4c34b21f1773af0927c4) is 8M, max 584.8M, 576.8M free.
Mar 25 02:33:03.815917 systemd-journald[1087]: Received client request to flush runtime journal.
Mar 25 02:33:03.718316 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 02:33:03.733564 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 25 02:33:03.749671 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 25 02:33:03.750827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 25 02:33:03.753517 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 25 02:33:03.769400 systemd-tmpfiles[1135]: ACLs are not supported, ignoring.
Mar 25 02:33:03.772897 systemd-tmpfiles[1135]: ACLs are not supported, ignoring.
Mar 25 02:33:03.784307 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 02:33:03.786952 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 25 02:33:03.821358 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 25 02:33:03.828807 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 25 02:33:03.837837 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 25 02:33:03.865367 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 25 02:33:03.869489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 02:33:03.875295 kernel: loop1: detected capacity change from 0 to 8
Mar 25 02:33:03.892997 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Mar 25 02:33:03.893317 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Mar 25 02:33:03.899367 kernel: loop2: detected capacity change from 0 to 109808
Mar 25 02:33:03.900579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 02:33:03.955303 kernel: loop3: detected capacity change from 0 to 151640
Mar 25 02:33:04.050330 kernel: loop4: detected capacity change from 0 to 205544
Mar 25 02:33:04.114642 kernel: loop5: detected capacity change from 0 to 8
Mar 25 02:33:04.125561 kernel: loop6: detected capacity change from 0 to 109808
Mar 25 02:33:04.181319 kernel: loop7: detected capacity change from 0 to 151640
Mar 25 02:33:04.223588 (sd-merge)[1166]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 25 02:33:04.224567 (sd-merge)[1166]: Merged extensions into '/usr'.
Mar 25 02:33:04.230574 systemd[1]: Reload requested from client PID 1134 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 25 02:33:04.230679 systemd[1]: Reloading...
Mar 25 02:33:04.319443 zram_generator::config[1190]: No configuration found.
Mar 25 02:33:04.563941 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 02:33:04.601309 ldconfig[1129]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 25 02:33:04.649877 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 25 02:33:04.650627 systemd[1]: Reloading finished in 418 ms.
Mar 25 02:33:04.672425 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 25 02:33:04.673521 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 25 02:33:04.674539 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 25 02:33:04.683718 systemd[1]: Starting ensure-sysext.service...
Mar 25 02:33:04.686891 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 02:33:04.689545 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 02:33:04.721362 systemd[1]: Reload requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)...
Mar 25 02:33:04.721380 systemd[1]: Reloading...
Mar 25 02:33:04.735201 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 25 02:33:04.736578 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 25 02:33:04.739742 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 25 02:33:04.741719 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Mar 25 02:33:04.742242 systemd-tmpfiles[1252]: ACLs are not supported, ignoring.
Mar 25 02:33:04.752764 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 02:33:04.753836 systemd-tmpfiles[1252]: Skipping /boot
Mar 25 02:33:04.763936 systemd-udevd[1253]: Using default interface naming scheme 'v255'.
Mar 25 02:33:04.785775 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 02:33:04.786834 systemd-tmpfiles[1252]: Skipping /boot
Mar 25 02:33:04.815310 zram_generator::config[1279]: No configuration found.
Mar 25 02:33:04.968300 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1302)
Mar 25 02:33:04.983328 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 25 02:33:05.030286 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar 25 02:33:05.065461 kernel: ACPI: button: Power Button [PWRF]
Mar 25 02:33:05.065488 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 25 02:33:05.061053 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 02:33:05.093470 kernel: mousedev: PS/2 mouse device common for all mice
Mar 25 02:33:05.125856 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Mar 25 02:33:05.125938 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Mar 25 02:33:05.131946 kernel: Console: switching to colour dummy device 80x25
Mar 25 02:33:05.131996 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 25 02:33:05.132015 kernel: [drm] features: -context_init
Mar 25 02:33:05.134283 kernel: [drm] number of scanouts: 1
Mar 25 02:33:05.134322 kernel: [drm] number of cap sets: 0
Mar 25 02:33:05.138302 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Mar 25 02:33:05.142345 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 25 02:33:05.148105 kernel: Console: switching to colour frame buffer device 160x50
Mar 25 02:33:05.154508 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 25 02:33:05.178543 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 25 02:33:05.178691 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 25 02:33:05.181819 systemd[1]: Reloading finished in 460 ms.
Mar 25 02:33:05.194233 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 02:33:05.200200 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 02:33:05.231660 systemd[1]: Finished ensure-sysext.service.
Mar 25 02:33:05.248209 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 25 02:33:05.269369 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 02:33:05.271577 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 02:33:05.287484 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 25 02:33:05.287897 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 02:33:05.290440 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 25 02:33:05.294789 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 02:33:05.298960 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 02:33:05.312039 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 02:33:05.316984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 02:33:05.319085 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 02:33:05.322935 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 25 02:33:05.324673 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 02:33:05.327010 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 25 02:33:05.332305 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 02:33:05.332537 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 02:33:05.349457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 02:33:05.355545 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 25 02:33:05.362212 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 25 02:33:05.371057 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 02:33:05.372080 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 02:33:05.373105 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 25 02:33:05.373431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 02:33:05.373581 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 02:33:05.373854 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 02:33:05.373999 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 02:33:05.374258 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 02:33:05.376101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 02:33:05.387135 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 02:33:05.387464 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 02:33:05.392841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 02:33:05.403547 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 25 02:33:05.404163 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 02:33:05.404353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 02:33:05.411894 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 25 02:33:05.421836 augenrules[1414]: No rules
Mar 25 02:33:05.421519 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 25 02:33:05.425755 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 02:33:05.425948 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 02:33:05.432102 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 25 02:33:05.432777 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 02:33:05.438317 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 25 02:33:05.447495 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 25 02:33:05.473676 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 25 02:33:05.482795 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 25 02:33:05.501353 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 25 02:33:05.525328 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 25 02:33:05.531976 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 25 02:33:05.547066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 02:33:05.597769 systemd-networkd[1389]: lo: Link UP
Mar 25 02:33:05.598070 systemd-networkd[1389]: lo: Gained carrier
Mar 25 02:33:05.599505 systemd-networkd[1389]: Enumeration completed
Mar 25 02:33:05.599678 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 02:33:05.600005 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 02:33:05.600076 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 02:33:05.602319 systemd-networkd[1389]: eth0: Link UP
Mar 25 02:33:05.602381 systemd-networkd[1389]: eth0: Gained carrier
Mar 25 02:33:05.602486 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 02:33:05.607507 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 25 02:33:05.614398 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 25 02:33:05.618357 systemd-networkd[1389]: eth0: DHCPv4 address 172.24.4.54/24, gateway 172.24.4.1 acquired from 172.24.4.1
Mar 25 02:33:05.641532 systemd-resolved[1391]: Positive Trust Anchors:
Mar 25 02:33:05.641901 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 02:33:05.642054 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 02:33:05.643133 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 25 02:33:05.647394 systemd-resolved[1391]: Using system hostname 'ci-4284-0-0-3-6c96446f48.novalocal'.
Mar 25 02:33:05.647771 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 25 02:33:05.649987 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 02:33:05.650501 systemd[1]: Reached target network.target - Network.
Mar 25 02:33:05.650923 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 02:33:05.652523 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 02:33:05.654030 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 25 02:33:05.655557 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 25 02:33:05.657020 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 25 02:33:05.658323 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 25 02:33:05.658425 systemd[1]: Reached target paths.target - Path Units.
Mar 25 02:33:05.659309 systemd[1]: Reached target time-set.target - System Time Set.
Mar 25 02:33:05.660654 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 25 02:33:05.661755 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 25 02:33:05.663145 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 02:33:05.665850 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 25 02:33:05.669105 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 25 02:33:05.673650 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 25 02:33:05.678588 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 25 02:33:05.681362 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 25 02:33:05.685760 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 25 02:33:05.689423 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 25 02:33:05.690795 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 25 02:33:05.692441 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 02:33:05.692959 systemd[1]: Reached target basic.target - Basic System.
Mar 25 02:33:05.693521 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 25 02:33:05.693557 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 25 02:33:05.696994 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 25 02:33:05.701237 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 25 02:33:05.710544 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 25 02:33:05.715327 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 25 02:33:05.721398 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 25 02:33:05.723827 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 25 02:33:05.732433 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 25 02:33:05.737513 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 25 02:33:05.742525 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 25 02:33:05.748294 jq[1450]: false
Mar 25 02:33:05.748750 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 25 02:33:05.753775 extend-filesystems[1451]: Found loop4
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found loop5
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found loop6
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found loop7
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found vda
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found vda1
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found vda2
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found vda3
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found usr
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found vda4
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found vda6
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found vda7
Mar 25 02:33:05.756204 extend-filesystems[1451]: Found vda9
Mar 25 02:33:05.756204 extend-filesystems[1451]: Checking size of /dev/vda9
Mar 25 02:33:05.769447 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 25 02:33:05.770667 dbus-daemon[1447]: [system] SELinux support is enabled
Mar 25 02:33:05.775098 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 25 02:33:05.777752 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 25 02:33:05.787950 systemd[1]: Starting update-engine.service - Update Engine...
Mar 25 02:33:05.802407 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 25 02:33:05.808545 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1286)
Mar 25 02:33:05.809715 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 25 02:33:05.816501 extend-filesystems[1451]: Resized partition /dev/vda9
Mar 25 02:33:05.822818 extend-filesystems[1465]: resize2fs 1.47.2 (1-Jan-2025)
Mar 25 02:33:05.833804 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 25 02:33:05.834439 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 25 02:33:05.846780 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Mar 25 02:33:05.846585 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 25 02:33:05.846795 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 25 02:33:05.871257 update_engine[1461]: I20250325 02:33:05.866225 1461 main.cc:92] Flatcar Update Engine starting
Mar 25 02:33:05.871257 update_engine[1461]: I20250325 02:33:05.868676 1461 update_check_scheduler.cc:74] Next update check in 10m25s
Mar 25 02:33:05.871627 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 25 02:33:05.882459 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Mar 25 02:33:05.964427 jq[1462]: true
Mar 25 02:33:05.907608 systemd[1]: motdgen.service: Deactivated successfully.
Mar 25 02:33:05.908100 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 25 02:33:05.964861 jq[1482]: true
Mar 25 02:33:05.921103 systemd[1]: Started update-engine.service - Update Engine.
Mar 25 02:33:05.939994 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 25 02:33:05.952387 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 25 02:33:05.952430 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 25 02:33:05.954583 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 25 02:33:05.954601 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 25 02:33:05.959310 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 25 02:33:05.969210 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 25 02:33:05.969210 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 25 02:33:05.969210 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Mar 25 02:33:05.985974 extend-filesystems[1451]: Resized filesystem in /dev/vda9
Mar 25 02:33:05.970439 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 25 02:33:05.989604 tar[1470]: linux-amd64/helm
Mar 25 02:33:05.970673 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 25 02:33:06.011934 systemd-logind[1458]: New seat seat0.
Mar 25 02:33:06.015950 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 25 02:33:06.015975 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 25 02:33:06.016147 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 25 02:33:06.048616 bash[1507]: Updated "/home/core/.ssh/authorized_keys"
Mar 25 02:33:06.047668 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 25 02:33:06.066586 systemd[1]: Starting sshkeys.service...
Mar 25 02:33:06.075682 systemd-timesyncd[1396]: Contacted time server 69.30.247.121:123 (0.flatcar.pool.ntp.org).
Mar 25 02:33:06.075750 systemd-timesyncd[1396]: Initial clock synchronization to Tue 2025-03-25 02:33:06.400743 UTC.
Mar 25 02:33:06.104556 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 25 02:33:06.109647 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 25 02:33:06.172029 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 25 02:33:06.319696 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 25 02:33:06.352343 containerd[1474]: time="2025-03-25T02:33:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 25 02:33:06.352998 containerd[1474]: time="2025-03-25T02:33:06.352629401Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 25 02:33:06.366132 containerd[1474]: time="2025-03-25T02:33:06.366083925Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.152µs"
Mar 25 02:33:06.366132 containerd[1474]: time="2025-03-25T02:33:06.366125012Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 25 02:33:06.366225 containerd[1474]: time="2025-03-25T02:33:06.366146132Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 25 02:33:06.366395 containerd[1474]: time="2025-03-25T02:33:06.366369340Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 25 02:33:06.366433 containerd[1474]: time="2025-03-25T02:33:06.366394818Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 25 02:33:06.366433 containerd[1474]: time="2025-03-25T02:33:06.366425035Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 02:33:06.366516 containerd[1474]: time="2025-03-25T02:33:06.366489736Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 02:33:06.366516 containerd[1474]: time="2025-03-25T02:33:06.366510685Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 02:33:06.366780 containerd[1474]: time="2025-03-25T02:33:06.366750726Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 02:33:06.366780 containerd[1474]: time="2025-03-25T02:33:06.366774991Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 02:33:06.366837 containerd[1474]: time="2025-03-25T02:33:06.366788797Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 02:33:06.366837 containerd[1474]: time="2025-03-25T02:33:06.366799417Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 25 02:33:06.366915 containerd[1474]: time="2025-03-25T02:33:06.366891149Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 25 02:33:06.367903 containerd[1474]: time="2025-03-25T02:33:06.367112965Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 02:33:06.367903 containerd[1474]: time="2025-03-25T02:33:06.367155184Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 02:33:06.367903 containerd[1474]: time="2025-03-25T02:33:06.367168539Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 25 02:33:06.367903 containerd[1474]: time="2025-03-25T02:33:06.367207843Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 25 02:33:06.368137 containerd[1474]: time="2025-03-25T02:33:06.368097622Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 25 02:33:06.368340 containerd[1474]: time="2025-03-25T02:33:06.368188212Z" level=info msg="metadata content store policy set" policy=shared
Mar 25 02:33:06.375656 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 25 02:33:06.382513 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 25 02:33:06.386942 containerd[1474]: time="2025-03-25T02:33:06.386794928Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 25 02:33:06.386942 containerd[1474]: time="2025-03-25T02:33:06.386888233Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 25 02:33:06.386942 containerd[1474]: time="2025-03-25T02:33:06.386907339Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 25 02:33:06.386942 containerd[1474]: time="2025-03-25T02:33:06.386921977Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 25 02:33:06.387059 containerd[1474]: time="2025-03-25T02:33:06.386948106Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 25 02:33:06.387059 containerd[1474]: time="2025-03-25T02:33:06.386964557Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 25 02:33:06.387059 containerd[1474]: time="2025-03-25T02:33:06.386985706Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 25 02:33:06.387059 containerd[1474]: time="2025-03-25T02:33:06.387000594Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 25 02:33:06.387059 containerd[1474]: time="2025-03-25T02:33:06.387014360Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 25 02:33:06.387059 containerd[1474]: time="2025-03-25T02:33:06.387026713Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 25 02:33:06.387059 containerd[1474]: time="2025-03-25T02:33:06.387037774Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 25 02:33:06.387059 containerd[1474]: time="2025-03-25T02:33:06.387051420Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 25 02:33:06.387229 containerd[1474]: time="2025-03-25T02:33:06.387198545Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 25 02:33:06.387229 containerd[1474]: time="2025-03-25T02:33:06.387221539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 25 02:33:06.387293 containerd[1474]: time="2025-03-25T02:33:06.387239683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 25 02:33:06.387293 containerd[1474]: time="2025-03-25T02:33:06.387252988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 25 02:33:06.387372 containerd[1474]: time="2025-03-25T02:33:06.387359437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 25 02:33:06.387402 containerd[1474]: time="2025-03-25T02:33:06.387378022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 25 02:33:06.387402 containerd[1474]: time="2025-03-25T02:33:06.387391648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 25 02:33:06.387448 containerd[1474]: time="2025-03-25T02:33:06.387403570Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 25 02:33:06.387448 containerd[1474]: time="2025-03-25T02:33:06.387417256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 25 02:33:06.387448 containerd[1474]: time="2025-03-25T02:33:06.387429689Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 25 02:33:06.387448 containerd[1474]: time="2025-03-25T02:33:06.387440810Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 25 02:33:06.387534 containerd[1474]: time="2025-03-25T02:33:06.387502255Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 25 02:33:06.387534 containerd[1474]: time="2025-03-25T02:33:06.387517223Z" level=info msg="Start snapshots syncer"
Mar 25 02:33:06.387581 containerd[1474]: time="2025-03-25T02:33:06.387538834Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 25 02:33:06.388519 containerd[1474]: time="2025-03-25T02:33:06.387785647Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 25 02:33:06.388519 containerd[1474]: time="2025-03-25T02:33:06.387844026Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.387903928Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.387987816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388010388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388022701Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388036457Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388049652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388061965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388073747Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388095237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388124212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388137206Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388164838Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388179064Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 25 02:33:06.388665 containerd[1474]: time="2025-03-25T02:33:06.388189634Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 25 02:33:06.388964 containerd[1474]: time="2025-03-25T02:33:06.388202048Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 25 02:33:06.388964 containerd[1474]: time="2025-03-25T02:33:06.388211736Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 25 02:33:06.388964 containerd[1474]: time="2025-03-25T02:33:06.388222105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 25 02:33:06.388964 containerd[1474]: time="2025-03-25T02:33:06.388233246Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 25 02:33:06.388964 containerd[1474]: time="2025-03-25T02:33:06.388250058Z" level=info msg="runtime interface created"
Mar 25 02:33:06.394217 containerd[1474]: time="2025-03-25T02:33:06.388257782Z" level=info msg="created NRI interface"
Mar 25 02:33:06.394217 containerd[1474]: time="2025-03-25T02:33:06.390543930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 25 02:33:06.394217 containerd[1474]: time="2025-03-25T02:33:06.390614492Z" level=info msg="Connect containerd service"
Mar 25 02:33:06.394217 containerd[1474]: time="2025-03-25T02:33:06.390652243Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 25 02:33:06.391322 systemd[1]: Started sshd@0-172.24.4.54:22-172.24.4.1:34102.service - OpenSSH per-connection server daemon (172.24.4.1:34102).
Mar 25 02:33:06.395980 containerd[1474]: time="2025-03-25T02:33:06.395940521Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 25 02:33:06.424696 systemd[1]: issuegen.service: Deactivated successfully.
Mar 25 02:33:06.425274 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 25 02:33:06.433537 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 25 02:33:06.464839 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 25 02:33:06.474736 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 25 02:33:06.484639 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 25 02:33:06.485532 systemd[1]: Reached target getty.target - Login Prompts.
Mar 25 02:33:06.614158 containerd[1474]: time="2025-03-25T02:33:06.614068239Z" level=info msg="Start subscribing containerd event"
Mar 25 02:33:06.614392 containerd[1474]: time="2025-03-25T02:33:06.614362031Z" level=info msg="Start recovering state"
Mar 25 02:33:06.614520 containerd[1474]: time="2025-03-25T02:33:06.614505500Z" level=info msg="Start event monitor"
Mar 25 02:33:06.614791 containerd[1474]: time="2025-03-25T02:33:06.614776207Z" level=info msg="Start cni network conf syncer for default"
Mar 25 02:33:06.614859 containerd[1474]: time="2025-03-25T02:33:06.614846249Z" level=info msg="Start streaming server"
Mar 25 02:33:06.614914 containerd[1474]: time="2025-03-25T02:33:06.614901763Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 25 02:33:06.614979 containerd[1474]: time="2025-03-25T02:33:06.614966725Z" level=info msg="runtime interface starting up..."
Mar 25 02:33:06.615326 containerd[1474]: time="2025-03-25T02:33:06.615254554Z" level=info msg="starting plugins..."
Mar 25 02:33:06.615488 containerd[1474]: time="2025-03-25T02:33:06.615473425Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 25 02:33:06.615612 containerd[1474]: time="2025-03-25T02:33:06.615170737Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 25 02:33:06.616285 containerd[1474]: time="2025-03-25T02:33:06.615765603Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 25 02:33:06.616420 containerd[1474]: time="2025-03-25T02:33:06.616405162Z" level=info msg="containerd successfully booted in 0.265413s"
Mar 25 02:33:06.616551 systemd[1]: Started containerd.service - containerd container runtime.
Mar 25 02:33:06.694335 tar[1470]: linux-amd64/LICENSE
Mar 25 02:33:06.694674 tar[1470]: linux-amd64/README.md
Mar 25 02:33:06.711933 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 25 02:33:06.725371 systemd-networkd[1389]: eth0: Gained IPv6LL
Mar 25 02:33:06.728029 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 25 02:33:06.735111 systemd[1]: Reached target network-online.target - Network is Online.
Mar 25 02:33:06.741514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:33:06.748857 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 25 02:33:06.792474 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 25 02:33:07.660207 sshd[1534]: Accepted publickey for core from 172.24.4.1 port 34102 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:33:07.663799 sshd-session[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:33:07.704257 systemd-logind[1458]: New session 1 of user core.
Mar 25 02:33:07.707491 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 25 02:33:07.713656 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 25 02:33:07.739931 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 25 02:33:07.746523 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 25 02:33:07.760850 (systemd)[1573]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 25 02:33:07.768854 systemd-logind[1458]: New session c1 of user core.
Mar 25 02:33:07.942355 systemd[1573]: Queued start job for default target default.target.
Mar 25 02:33:07.948221 systemd[1573]: Created slice app.slice - User Application Slice.
Mar 25 02:33:07.948249 systemd[1573]: Reached target paths.target - Paths.
Mar 25 02:33:07.948288 systemd[1573]: Reached target timers.target - Timers.
Mar 25 02:33:07.950413 systemd[1573]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 25 02:33:07.963449 systemd[1573]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 25 02:33:07.963593 systemd[1573]: Reached target sockets.target - Sockets.
Mar 25 02:33:07.963639 systemd[1573]: Reached target basic.target - Basic System.
Mar 25 02:33:07.963678 systemd[1573]: Reached target default.target - Main User Target.
Mar 25 02:33:07.963707 systemd[1573]: Startup finished in 185ms.
Mar 25 02:33:07.963830 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 25 02:33:07.975527 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 25 02:33:08.322512 systemd[1]: Started sshd@1-172.24.4.54:22-172.24.4.1:58210.service - OpenSSH per-connection server daemon (172.24.4.1:58210).
Mar 25 02:33:08.890710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:33:08.909022 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 02:33:09.692075 sshd[1584]: Accepted publickey for core from 172.24.4.1 port 58210 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:33:09.695631 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:33:09.709469 systemd-logind[1458]: New session 2 of user core.
Mar 25 02:33:09.722776 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 25 02:33:10.338936 sshd[1598]: Connection closed by 172.24.4.1 port 58210
Mar 25 02:33:10.341624 sshd-session[1584]: pam_unix(sshd:session): session closed for user core
Mar 25 02:33:10.360911 systemd[1]: sshd@1-172.24.4.54:22-172.24.4.1:58210.service: Deactivated successfully.
Mar 25 02:33:10.365499 systemd[1]: session-2.scope: Deactivated successfully.
Mar 25 02:33:10.367558 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit.
Mar 25 02:33:10.372550 systemd[1]: Started sshd@2-172.24.4.54:22-172.24.4.1:58226.service - OpenSSH per-connection server daemon (172.24.4.1:58226).
Mar 25 02:33:10.378679 systemd-logind[1458]: Removed session 2.
Mar 25 02:33:10.645095 kubelet[1592]: E0325 02:33:10.644187 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 02:33:10.648238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 02:33:10.648621 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 02:33:10.649380 systemd[1]: kubelet.service: Consumed 2.086s CPU time, 236.2M memory peak.
Mar 25 02:33:11.547193 login[1544]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 25 02:33:11.562431 systemd-logind[1458]: New session 3 of user core.
Mar 25 02:33:11.572209 login[1547]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 25 02:33:11.578196 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 25 02:33:11.593650 systemd-logind[1458]: New session 4 of user core.
Mar 25 02:33:11.605828 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 25 02:33:11.739620 sshd[1604]: Accepted publickey for core from 172.24.4.1 port 58226 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:33:11.742443 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:33:11.754685 systemd-logind[1458]: New session 5 of user core.
Mar 25 02:33:11.766766 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 25 02:33:12.396336 sshd[1632]: Connection closed by 172.24.4.1 port 58226
Mar 25 02:33:12.395210 sshd-session[1604]: pam_unix(sshd:session): session closed for user core
Mar 25 02:33:12.402226 systemd[1]: sshd@2-172.24.4.54:22-172.24.4.1:58226.service: Deactivated successfully.
Mar 25 02:33:12.406872 systemd[1]: session-5.scope: Deactivated successfully.
Mar 25 02:33:12.409946 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit.
Mar 25 02:33:12.412079 systemd-logind[1458]: Removed session 5.
Mar 25 02:33:12.780735 coreos-metadata[1446]: Mar 25 02:33:12.780 WARN failed to locate config-drive, using the metadata service API instead
Mar 25 02:33:12.831011 coreos-metadata[1446]: Mar 25 02:33:12.830 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 25 02:33:13.089117 coreos-metadata[1446]: Mar 25 02:33:13.088 INFO Fetch successful
Mar 25 02:33:13.089117 coreos-metadata[1446]: Mar 25 02:33:13.088 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 25 02:33:13.104381 coreos-metadata[1446]: Mar 25 02:33:13.104 INFO Fetch successful
Mar 25 02:33:13.104381 coreos-metadata[1446]: Mar 25 02:33:13.104 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 25 02:33:13.121627 coreos-metadata[1446]: Mar 25 02:33:13.121 INFO Fetch successful
Mar 25 02:33:13.121627 coreos-metadata[1446]: Mar 25 02:33:13.121 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 25 02:33:13.137657 coreos-metadata[1446]: Mar 25 02:33:13.137 INFO Fetch successful
Mar 25 02:33:13.137657 coreos-metadata[1446]: Mar 25 02:33:13.137 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 25 02:33:13.152159 coreos-metadata[1446]: Mar 25 02:33:13.152 INFO Fetch successful
Mar 25 02:33:13.152159 coreos-metadata[1446]: Mar 25 02:33:13.152 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 25 02:33:13.167976 coreos-metadata[1446]: Mar 25 02:33:13.167 INFO Fetch successful
Mar 25 02:33:13.217179 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 25 02:33:13.218614 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 25 02:33:13.226254 coreos-metadata[1510]: Mar 25 02:33:13.226 WARN failed to locate config-drive, using the metadata service API instead
Mar 25 02:33:13.269939 coreos-metadata[1510]: Mar 25 02:33:13.269 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 25 02:33:13.286744 coreos-metadata[1510]: Mar 25 02:33:13.286 INFO Fetch successful
Mar 25 02:33:13.286744 coreos-metadata[1510]: Mar 25 02:33:13.286 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 25 02:33:13.300200 coreos-metadata[1510]: Mar 25 02:33:13.300 INFO Fetch successful
Mar 25 02:33:13.305087 unknown[1510]: wrote ssh authorized keys file for user: core
Mar 25 02:33:13.350205 update-ssh-keys[1647]: Updated "/home/core/.ssh/authorized_keys"
Mar 25 02:33:13.351562 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 25 02:33:13.355708 systemd[1]: Finished sshkeys.service.
Mar 25 02:33:13.360602 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 25 02:33:13.361109 systemd[1]: Startup finished in 1.238s (kernel) + 15.645s (initrd) + 10.967s (userspace) = 27.850s.
Mar 25 02:33:20.900962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 25 02:33:20.904226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:33:21.240463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:33:21.254869 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 02:33:21.348957 kubelet[1658]: E0325 02:33:21.348848 1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 02:33:21.356929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 02:33:21.357528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 02:33:21.358523 systemd[1]: kubelet.service: Consumed 323ms CPU time, 95.7M memory peak.
Mar 25 02:33:22.532149 systemd[1]: Started sshd@3-172.24.4.54:22-172.24.4.1:55896.service - OpenSSH per-connection server daemon (172.24.4.1:55896).
Mar 25 02:33:24.009849 sshd[1666]: Accepted publickey for core from 172.24.4.1 port 55896 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:33:24.012331 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:33:24.022861 systemd-logind[1458]: New session 6 of user core.
Mar 25 02:33:24.033538 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 25 02:33:24.702379 sshd[1668]: Connection closed by 172.24.4.1 port 55896
Mar 25 02:33:24.704605 sshd-session[1666]: pam_unix(sshd:session): session closed for user core
Mar 25 02:33:24.720479 systemd[1]: sshd@3-172.24.4.54:22-172.24.4.1:55896.service: Deactivated successfully.
Mar 25 02:33:24.723914 systemd[1]: session-6.scope: Deactivated successfully.
Mar 25 02:33:24.726986 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit.
Mar 25 02:33:24.731053 systemd[1]: Started sshd@4-172.24.4.54:22-172.24.4.1:54894.service - OpenSSH per-connection server daemon (172.24.4.1:54894).
Mar 25 02:33:24.733851 systemd-logind[1458]: Removed session 6.
Mar 25 02:33:26.258453 sshd[1673]: Accepted publickey for core from 172.24.4.1 port 54894 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:33:26.262067 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:33:26.274255 systemd-logind[1458]: New session 7 of user core.
Mar 25 02:33:26.292640 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 25 02:33:26.998526 sshd[1676]: Connection closed by 172.24.4.1 port 54894
Mar 25 02:33:26.998118 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Mar 25 02:33:27.015532 systemd[1]: sshd@4-172.24.4.54:22-172.24.4.1:54894.service: Deactivated successfully.
Mar 25 02:33:27.018842 systemd[1]: session-7.scope: Deactivated successfully.
Mar 25 02:33:27.020935 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit.
Mar 25 02:33:27.024839 systemd[1]: Started sshd@5-172.24.4.54:22-172.24.4.1:54902.service - OpenSSH per-connection server daemon (172.24.4.1:54902).
Mar 25 02:33:27.027182 systemd-logind[1458]: Removed session 7.
Mar 25 02:33:28.318585 sshd[1681]: Accepted publickey for core from 172.24.4.1 port 54902 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:33:28.321173 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:33:28.334047 systemd-logind[1458]: New session 8 of user core.
Mar 25 02:33:28.340604 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 25 02:33:28.985318 sshd[1684]: Connection closed by 172.24.4.1 port 54902
Mar 25 02:33:28.986536 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Mar 25 02:33:29.001485 systemd[1]: sshd@5-172.24.4.54:22-172.24.4.1:54902.service: Deactivated successfully.
Mar 25 02:33:29.004212 systemd[1]: session-8.scope: Deactivated successfully.
Mar 25 02:33:29.007733 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit.
Mar 25 02:33:29.010684 systemd[1]: Started sshd@6-172.24.4.54:22-172.24.4.1:54910.service - OpenSSH per-connection server daemon (172.24.4.1:54910).
Mar 25 02:33:29.013669 systemd-logind[1458]: Removed session 8.
Mar 25 02:33:30.413880 sshd[1689]: Accepted publickey for core from 172.24.4.1 port 54910 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:33:30.417036 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:33:30.428766 systemd-logind[1458]: New session 9 of user core.
Mar 25 02:33:30.439556 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 25 02:33:30.944146 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 25 02:33:30.945690 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 02:33:30.967867 sudo[1693]: pam_unix(sudo:session): session closed for user root
Mar 25 02:33:31.186313 sshd[1692]: Connection closed by 172.24.4.1 port 54910
Mar 25 02:33:31.187563 sshd-session[1689]: pam_unix(sshd:session): session closed for user core
Mar 25 02:33:31.203760 systemd[1]: sshd@6-172.24.4.54:22-172.24.4.1:54910.service: Deactivated successfully.
Mar 25 02:33:31.207003 systemd[1]: session-9.scope: Deactivated successfully.
Mar 25 02:33:31.210764 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit.
Mar 25 02:33:31.214239 systemd[1]: Started sshd@7-172.24.4.54:22-172.24.4.1:54912.service - OpenSSH per-connection server daemon (172.24.4.1:54912).
Mar 25 02:33:31.217579 systemd-logind[1458]: Removed session 9.
Mar 25 02:33:31.373433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 25 02:33:31.377493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:33:31.731040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:33:31.752323 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 02:33:31.824889 kubelet[1709]: E0325 02:33:31.824785 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 02:33:31.827788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 02:33:31.828058 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 02:33:31.828953 systemd[1]: kubelet.service: Consumed 273ms CPU time, 97.5M memory peak.
Mar 25 02:33:32.794021 sshd[1698]: Accepted publickey for core from 172.24.4.1 port 54912 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:33:32.796765 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:33:32.808770 systemd-logind[1458]: New session 10 of user core.
Mar 25 02:33:32.818556 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 25 02:33:33.277830 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 25 02:33:33.278563 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 02:33:33.285424 sudo[1718]: pam_unix(sudo:session): session closed for user root
Mar 25 02:33:33.296953 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 25 02:33:33.297634 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 02:33:33.318004 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 02:33:33.394138 augenrules[1740]: No rules
Mar 25 02:33:33.396153 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 02:33:33.396724 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 02:33:33.400481 sudo[1717]: pam_unix(sudo:session): session closed for user root
Mar 25 02:33:33.584572 sshd[1716]: Connection closed by 172.24.4.1 port 54912
Mar 25 02:33:33.584179 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
Mar 25 02:33:33.607939 systemd[1]: sshd@7-172.24.4.54:22-172.24.4.1:54912.service: Deactivated successfully.
Mar 25 02:33:33.610916 systemd[1]: session-10.scope: Deactivated successfully.
Mar 25 02:33:33.612877 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit.
Mar 25 02:33:33.617220 systemd[1]: Started sshd@8-172.24.4.54:22-172.24.4.1:52178.service - OpenSSH per-connection server daemon (172.24.4.1:52178).
Mar 25 02:33:33.619470 systemd-logind[1458]: Removed session 10.
Mar 25 02:33:34.904700 sshd[1748]: Accepted publickey for core from 172.24.4.1 port 52178 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:33:34.907260 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:33:34.918779 systemd-logind[1458]: New session 11 of user core.
Mar 25 02:33:34.926607 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 25 02:33:35.370757 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 25 02:33:35.371426 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 25 02:33:36.087027 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 25 02:33:36.103520 (dockerd)[1769]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 25 02:33:36.620601 dockerd[1769]: time="2025-03-25T02:33:36.619600225Z" level=info msg="Starting up"
Mar 25 02:33:36.623408 dockerd[1769]: time="2025-03-25T02:33:36.623357772Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 25 02:33:36.677327 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2138999011-merged.mount: Deactivated successfully.
Mar 25 02:33:36.713541 dockerd[1769]: time="2025-03-25T02:33:36.713504084Z" level=info msg="Loading containers: start."
Mar 25 02:33:36.929461 kernel: Initializing XFRM netlink socket
Mar 25 02:33:37.049009 systemd-networkd[1389]: docker0: Link UP
Mar 25 02:33:37.197218 dockerd[1769]: time="2025-03-25T02:33:37.196689793Z" level=info msg="Loading containers: done."
Mar 25 02:33:37.230604 dockerd[1769]: time="2025-03-25T02:33:37.230520929Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 25 02:33:37.230876 dockerd[1769]: time="2025-03-25T02:33:37.230685459Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
Mar 25 02:33:37.230947 dockerd[1769]: time="2025-03-25T02:33:37.230898787Z" level=info msg="Daemon has completed initialization"
Mar 25 02:33:37.296174 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 25 02:33:37.297751 dockerd[1769]: time="2025-03-25T02:33:37.295896726Z" level=info msg="API listen on /run/docker.sock"
Mar 25 02:33:38.859366 containerd[1474]: time="2025-03-25T02:33:38.859244240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 25 02:33:39.606773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735015774.mount: Deactivated successfully.
Mar 25 02:33:41.399964 containerd[1474]: time="2025-03-25T02:33:41.399772146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:41.402327 containerd[1474]: time="2025-03-25T02:33:41.402154740Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959276"
Mar 25 02:33:41.404457 containerd[1474]: time="2025-03-25T02:33:41.404341492Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:41.410564 containerd[1474]: time="2025-03-25T02:33:41.410465499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:41.413397 containerd[1474]: time="2025-03-25T02:33:41.413049479Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 2.553680132s"
Mar 25 02:33:41.413397 containerd[1474]: time="2025-03-25T02:33:41.413130269Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\""
Mar 25 02:33:41.417167 containerd[1474]: time="2025-03-25T02:33:41.417110446Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 25 02:33:41.874399 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 25 02:33:41.877622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:33:42.067431 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:33:42.073549 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 25 02:33:42.334177 kubelet[2028]: E0325 02:33:42.332069 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 25 02:33:42.334810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 25 02:33:42.334965 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 25 02:33:42.335259 systemd[1]: kubelet.service: Consumed 246ms CPU time, 94M memory peak.
Mar 25 02:33:43.528779 containerd[1474]: time="2025-03-25T02:33:43.528653884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:43.539710 containerd[1474]: time="2025-03-25T02:33:43.539558808Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713784"
Mar 25 02:33:43.541771 containerd[1474]: time="2025-03-25T02:33:43.541640924Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:43.547741 containerd[1474]: time="2025-03-25T02:33:43.547608878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:43.552086 containerd[1474]: time="2025-03-25T02:33:43.550800576Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 2.13361309s"
Mar 25 02:33:43.552086 containerd[1474]: time="2025-03-25T02:33:43.550893030Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\""
Mar 25 02:33:43.552086 containerd[1474]: time="2025-03-25T02:33:43.552026673Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 25 02:33:45.382810 containerd[1474]: time="2025-03-25T02:33:45.382596226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:45.384034 containerd[1474]: time="2025-03-25T02:33:45.383754492Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780376"
Mar 25 02:33:45.385220 containerd[1474]: time="2025-03-25T02:33:45.385165451Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:45.388486 containerd[1474]: time="2025-03-25T02:33:45.388427013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:45.389332 containerd[1474]: time="2025-03-25T02:33:45.389197642Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 1.837125395s"
Mar 25 02:33:45.389332 containerd[1474]: time="2025-03-25T02:33:45.389228739Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\""
Mar 25 02:33:45.390058 containerd[1474]: time="2025-03-25T02:33:45.389794183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 25 02:33:47.043165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1539980688.mount: Deactivated successfully.
Mar 25 02:33:47.569419 containerd[1474]: time="2025-03-25T02:33:47.569376345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:47.570556 containerd[1474]: time="2025-03-25T02:33:47.570523494Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354638"
Mar 25 02:33:47.571985 containerd[1474]: time="2025-03-25T02:33:47.571961763Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:47.574313 containerd[1474]: time="2025-03-25T02:33:47.574289853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:33:47.574874 containerd[1474]: time="2025-03-25T02:33:47.574832699Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 2.185012291s"
Mar 25 02:33:47.574920 containerd[1474]: time="2025-03-25T02:33:47.574873727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\""
Mar 25 02:33:47.575255 containerd[1474]: time="2025-03-25T02:33:47.575224992Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 25 02:33:48.507026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount252568097.mount: Deactivated successfully.
Mar 25 02:33:51.437461 containerd[1474]: time="2025-03-25T02:33:51.436462591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:33:51.438932 containerd[1474]: time="2025-03-25T02:33:51.438898677Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Mar 25 02:33:51.440444 containerd[1474]: time="2025-03-25T02:33:51.440397472Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:33:51.443155 containerd[1474]: time="2025-03-25T02:33:51.443131125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:33:51.444159 containerd[1474]: time="2025-03-25T02:33:51.444135120Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.868881049s" Mar 25 02:33:51.444245 containerd[1474]: time="2025-03-25T02:33:51.444228345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 25 02:33:51.445316 containerd[1474]: time="2025-03-25T02:33:51.445297631Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 25 02:33:51.526082 update_engine[1461]: I20250325 02:33:51.525964 1461 update_attempter.cc:509] Updating boot flags... 
Mar 25 02:33:51.591631 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2104) Mar 25 02:33:51.662733 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2103) Mar 25 02:33:52.013214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2564203668.mount: Deactivated successfully. Mar 25 02:33:52.023376 containerd[1474]: time="2025-03-25T02:33:52.023237893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 02:33:52.025962 containerd[1474]: time="2025-03-25T02:33:52.025811403Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Mar 25 02:33:52.027834 containerd[1474]: time="2025-03-25T02:33:52.027703154Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 02:33:52.031828 containerd[1474]: time="2025-03-25T02:33:52.031733271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 02:33:52.034176 containerd[1474]: time="2025-03-25T02:33:52.033909001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 588.503111ms" Mar 25 02:33:52.034176 containerd[1474]: time="2025-03-25T02:33:52.033975442Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 25 02:33:52.036131 containerd[1474]: time="2025-03-25T02:33:52.035741567Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 25 02:33:52.373039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 25 02:33:52.376240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:33:52.535384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:33:52.540490 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 02:33:52.776163 kubelet[2123]: E0325 02:33:52.775828 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 02:33:52.781138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 02:33:52.781537 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 02:33:52.782921 systemd[1]: kubelet.service: Consumed 213ms CPU time, 95.9M memory peak. Mar 25 02:33:52.984463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2807289039.mount: Deactivated successfully. 
Mar 25 02:33:56.276525 containerd[1474]: time="2025-03-25T02:33:56.276414458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:33:56.278915 containerd[1474]: time="2025-03-25T02:33:56.278730581Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Mar 25 02:33:56.280370 containerd[1474]: time="2025-03-25T02:33:56.280207770Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:33:56.289397 containerd[1474]: time="2025-03-25T02:33:56.289340514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:33:56.292901 containerd[1474]: time="2025-03-25T02:33:56.292797863Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.256993192s" Mar 25 02:33:56.292901 containerd[1474]: time="2025-03-25T02:33:56.292867336Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Mar 25 02:33:59.979635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:33:59.979824 systemd[1]: kubelet.service: Consumed 213ms CPU time, 95.9M memory peak. Mar 25 02:33:59.982903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:34:00.028226 systemd[1]: Reload requested from client PID 2210 ('systemctl') (unit session-11.scope)... 
Mar 25 02:34:00.028568 systemd[1]: Reloading... Mar 25 02:34:00.148975 zram_generator::config[2259]: No configuration found. Mar 25 02:34:00.295985 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 02:34:00.422422 systemd[1]: Reloading finished in 392 ms. Mar 25 02:34:00.464177 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 25 02:34:00.464313 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 25 02:34:00.464608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:34:00.464657 systemd[1]: kubelet.service: Consumed 105ms CPU time, 83.6M memory peak. Mar 25 02:34:00.467565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 02:34:00.615311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 02:34:00.622672 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 02:34:00.692740 kubelet[2320]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 02:34:00.692740 kubelet[2320]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 02:34:00.692740 kubelet[2320]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 25 02:34:00.693083 kubelet[2320]: I0325 02:34:00.692923 2320 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 02:34:01.155988 kubelet[2320]: I0325 02:34:01.155857 2320 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 25 02:34:01.155988 kubelet[2320]: I0325 02:34:01.155905 2320 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 02:34:01.156392 kubelet[2320]: I0325 02:34:01.156171 2320 server.go:929] "Client rotation is on, will bootstrap in background" Mar 25 02:34:01.194146 kubelet[2320]: I0325 02:34:01.193737 2320 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 02:34:01.195669 kubelet[2320]: E0325 02:34:01.195559 2320 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" Mar 25 02:34:01.207697 kubelet[2320]: I0325 02:34:01.207674 2320 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 25 02:34:01.214133 kubelet[2320]: I0325 02:34:01.213309 2320 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 02:34:01.214133 kubelet[2320]: I0325 02:34:01.213456 2320 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 25 02:34:01.214133 kubelet[2320]: I0325 02:34:01.214061 2320 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 02:34:01.214509 kubelet[2320]: I0325 02:34:01.214106 2320 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-3-6c96446f48.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 25 02:34:01.214686 kubelet[2320]: I0325 02:34:01.214673 2320 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 02:34:01.214749 kubelet[2320]: I0325 02:34:01.214740 2320 container_manager_linux.go:300] "Creating device plugin manager" Mar 25 02:34:01.214902 kubelet[2320]: I0325 02:34:01.214890 2320 state_mem.go:36] "Initialized new in-memory state store" Mar 25 02:34:01.217618 kubelet[2320]: I0325 02:34:01.217604 2320 kubelet.go:408] "Attempting to sync node with API server" Mar 25 02:34:01.217701 kubelet[2320]: I0325 02:34:01.217691 2320 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 02:34:01.217787 kubelet[2320]: I0325 02:34:01.217777 2320 kubelet.go:314] "Adding apiserver pod source" Mar 25 02:34:01.217863 kubelet[2320]: I0325 02:34:01.217853 2320 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 02:34:01.232659 kubelet[2320]: W0325 02:34:01.232356 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-3-6c96446f48.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.54:6443: connect: connection refused Mar 25 02:34:01.232659 kubelet[2320]: E0325 02:34:01.232526 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-3-6c96446f48.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" Mar 25 02:34:01.232809 kubelet[2320]: I0325 02:34:01.232712 2320 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 02:34:01.237992 kubelet[2320]: W0325 02:34:01.237874 2320 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.54:6443: connect: connection refused Mar 25 02:34:01.237992 kubelet[2320]: E0325 02:34:01.237935 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" Mar 25 02:34:01.238123 kubelet[2320]: I0325 02:34:01.238083 2320 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 02:34:01.238313 kubelet[2320]: W0325 02:34:01.238196 2320 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 25 02:34:01.239663 kubelet[2320]: I0325 02:34:01.239559 2320 server.go:1269] "Started kubelet" Mar 25 02:34:01.251526 kubelet[2320]: I0325 02:34:01.250873 2320 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 02:34:01.251770 kubelet[2320]: E0325 02:34:01.248364 2320 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.54:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.54:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-3-6c96446f48.novalocal.182feb1181c02795 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-3-6c96446f48.novalocal,UID:ci-4284-0-0-3-6c96446f48.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-3-6c96446f48.novalocal,},FirstTimestamp:2025-03-25 02:34:01.239496597 +0000 UTC 
m=+0.613454424,LastTimestamp:2025-03-25 02:34:01.239496597 +0000 UTC m=+0.613454424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-3-6c96446f48.novalocal,}" Mar 25 02:34:01.254434 kubelet[2320]: I0325 02:34:01.254373 2320 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 02:34:01.255736 kubelet[2320]: I0325 02:34:01.255721 2320 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 25 02:34:01.255956 kubelet[2320]: I0325 02:34:01.255915 2320 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 25 02:34:01.256076 kubelet[2320]: E0325 02:34:01.256059 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found" Mar 25 02:34:01.260336 kubelet[2320]: I0325 02:34:01.260318 2320 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 25 02:34:01.262083 kubelet[2320]: I0325 02:34:01.261910 2320 reconciler.go:26] "Reconciler: start to sync state" Mar 25 02:34:01.262083 kubelet[2320]: I0325 02:34:01.261946 2320 server.go:460] "Adding debug handlers to kubelet server" Mar 25 02:34:01.263919 kubelet[2320]: I0325 02:34:01.263836 2320 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 02:34:01.264382 kubelet[2320]: I0325 02:34:01.264221 2320 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 02:34:01.265736 kubelet[2320]: W0325 02:34:01.264997 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.54:6443: connect: connection refused Mar 25 02:34:01.265736 
kubelet[2320]: E0325 02:34:01.265099 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" Mar 25 02:34:01.265736 kubelet[2320]: E0325 02:34:01.265232 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-3-6c96446f48.novalocal?timeout=10s\": dial tcp 172.24.4.54:6443: connect: connection refused" interval="200ms" Mar 25 02:34:01.265995 kubelet[2320]: I0325 02:34:01.265958 2320 factory.go:221] Registration of the systemd container factory successfully Mar 25 02:34:01.266132 kubelet[2320]: I0325 02:34:01.266098 2320 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 02:34:01.268162 kubelet[2320]: E0325 02:34:01.268125 2320 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 02:34:01.268965 kubelet[2320]: I0325 02:34:01.268925 2320 factory.go:221] Registration of the containerd container factory successfully Mar 25 02:34:01.279577 kubelet[2320]: I0325 02:34:01.279465 2320 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 02:34:01.280979 kubelet[2320]: I0325 02:34:01.280672 2320 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 02:34:01.280979 kubelet[2320]: I0325 02:34:01.280701 2320 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 02:34:01.280979 kubelet[2320]: I0325 02:34:01.280720 2320 kubelet.go:2321] "Starting kubelet main sync loop" Mar 25 02:34:01.280979 kubelet[2320]: E0325 02:34:01.280763 2320 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 02:34:01.292574 kubelet[2320]: W0325 02:34:01.292527 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.54:6443: connect: connection refused Mar 25 02:34:01.292744 kubelet[2320]: E0325 02:34:01.292715 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError" Mar 25 02:34:01.303378 kubelet[2320]: I0325 02:34:01.303360 2320 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 02:34:01.303513 kubelet[2320]: I0325 02:34:01.303502 2320 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 02:34:01.303618 kubelet[2320]: I0325 02:34:01.303608 2320 state_mem.go:36] "Initialized new in-memory state store" Mar 25 02:34:01.307074 kubelet[2320]: I0325 02:34:01.307061 2320 policy_none.go:49] "None policy: Start" Mar 25 02:34:01.307786 kubelet[2320]: I0325 02:34:01.307773 2320 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 02:34:01.307896 kubelet[2320]: I0325 02:34:01.307884 2320 state_mem.go:35] "Initializing new in-memory state store" Mar 25 02:34:01.316142 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Mar 25 02:34:01.329645 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 02:34:01.334055 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 25 02:34:01.342082 kubelet[2320]: I0325 02:34:01.342058 2320 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 02:34:01.342595 kubelet[2320]: I0325 02:34:01.342239 2320 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 25 02:34:01.342595 kubelet[2320]: I0325 02:34:01.342257 2320 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 02:34:01.342595 kubelet[2320]: I0325 02:34:01.342488 2320 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 02:34:01.345198 kubelet[2320]: E0325 02:34:01.345179 2320 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found" Mar 25 02:34:01.394920 systemd[1]: Created slice kubepods-burstable-pode9fe12a8a24dd3bc02a67a6d8f248f2a.slice - libcontainer container kubepods-burstable-pode9fe12a8a24dd3bc02a67a6d8f248f2a.slice. Mar 25 02:34:01.412062 systemd[1]: Created slice kubepods-burstable-podc143363b661bfa9d2e5b90ddba7ee671.slice - libcontainer container kubepods-burstable-podc143363b661bfa9d2e5b90ddba7ee671.slice. Mar 25 02:34:01.418792 systemd[1]: Created slice kubepods-burstable-pod2311b0f5079ea0024d2a40d08de83932.slice - libcontainer container kubepods-burstable-pod2311b0f5079ea0024d2a40d08de83932.slice. 
Mar 25 02:34:01.445483 kubelet[2320]: I0325 02:34:01.445153 2320 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.445767 kubelet[2320]: E0325 02:34:01.445720 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.54:6443/api/v1/nodes\": dial tcp 172.24.4.54:6443: connect: connection refused" node="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.463465 kubelet[2320]: I0325 02:34:01.463229 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9fe12a8a24dd3bc02a67a6d8f248f2a-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"e9fe12a8a24dd3bc02a67a6d8f248f2a\") " pod="kube-system/kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.463465 kubelet[2320]: I0325 02:34:01.463287 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c143363b661bfa9d2e5b90ddba7ee671-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"c143363b661bfa9d2e5b90ddba7ee671\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.463465 kubelet[2320]: I0325 02:34:01.463313 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c143363b661bfa9d2e5b90ddba7ee671-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"c143363b661bfa9d2e5b90ddba7ee671\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.463465 kubelet[2320]: I0325 02:34:01.463335 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/2311b0f5079ea0024d2a40d08de83932-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"2311b0f5079ea0024d2a40d08de83932\") " pod="kube-system/kube-scheduler-ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.463465 kubelet[2320]: I0325 02:34:01.463392 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9fe12a8a24dd3bc02a67a6d8f248f2a-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"e9fe12a8a24dd3bc02a67a6d8f248f2a\") " pod="kube-system/kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.463886 kubelet[2320]: I0325 02:34:01.463427 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9fe12a8a24dd3bc02a67a6d8f248f2a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"e9fe12a8a24dd3bc02a67a6d8f248f2a\") " pod="kube-system/kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.463886 kubelet[2320]: I0325 02:34:01.463447 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c143363b661bfa9d2e5b90ddba7ee671-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"c143363b661bfa9d2e5b90ddba7ee671\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.463886 kubelet[2320]: I0325 02:34:01.463464 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c143363b661bfa9d2e5b90ddba7ee671-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"c143363b661bfa9d2e5b90ddba7ee671\") " 
pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.463886 kubelet[2320]: I0325 02:34:01.463482 2320 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c143363b661bfa9d2e5b90ddba7ee671-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"c143363b661bfa9d2e5b90ddba7ee671\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.465984 kubelet[2320]: E0325 02:34:01.465907 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-3-6c96446f48.novalocal?timeout=10s\": dial tcp 172.24.4.54:6443: connect: connection refused" interval="400ms" Mar 25 02:34:01.649253 kubelet[2320]: I0325 02:34:01.649201 2320 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.649789 kubelet[2320]: E0325 02:34:01.649727 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.54:6443/api/v1/nodes\": dial tcp 172.24.4.54:6443: connect: connection refused" node="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:01.710472 containerd[1474]: time="2025-03-25T02:34:01.710312693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal,Uid:e9fe12a8a24dd3bc02a67a6d8f248f2a,Namespace:kube-system,Attempt:0,}" Mar 25 02:34:01.719727 containerd[1474]: time="2025-03-25T02:34:01.719318127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal,Uid:c143363b661bfa9d2e5b90ddba7ee671,Namespace:kube-system,Attempt:0,}" Mar 25 02:34:01.723153 containerd[1474]: time="2025-03-25T02:34:01.722842295Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-3-6c96446f48.novalocal,Uid:2311b0f5079ea0024d2a40d08de83932,Namespace:kube-system,Attempt:0,}"
Mar 25 02:34:01.866600 kubelet[2320]: E0325 02:34:01.866525 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-3-6c96446f48.novalocal?timeout=10s\": dial tcp 172.24.4.54:6443: connect: connection refused" interval="800ms"
Mar 25 02:34:02.052495 kubelet[2320]: I0325 02:34:02.052346 2320 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:02.052948 kubelet[2320]: E0325 02:34:02.052897 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.54:6443/api/v1/nodes\": dial tcp 172.24.4.54:6443: connect: connection refused" node="ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:02.241411 kubelet[2320]: W0325 02:34:02.241163 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.54:6443: connect: connection refused
Mar 25 02:34:02.241411 kubelet[2320]: E0325 02:34:02.241346 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:34:02.571817 containerd[1474]: time="2025-03-25T02:34:02.571612100Z" level=info msg="connecting to shim 98c42918800837d964a196c20039c1f70712d20736675b29f5a8736b288ba3b0" address="unix:///run/containerd/s/c862a291fa53e1d47d7ce0a8795f149a30371eea680c24b9848bcf0e6691285a" namespace=k8s.io protocol=ttrpc version=3
Mar 25 02:34:02.574417 containerd[1474]: time="2025-03-25T02:34:02.574375065Z" level=info msg="connecting to shim e45afc717807be5d6007b14ad3fe64084651fc6a547118dc681455feb19f9d53" address="unix:///run/containerd/s/c0ecf87b3b21bbbb81d66ba47def7864b287db603d6728bc3b29ad3bb8f52bb7" namespace=k8s.io protocol=ttrpc version=3
Mar 25 02:34:02.575310 containerd[1474]: time="2025-03-25T02:34:02.575282732Z" level=info msg="connecting to shim 071d856c01692748d24c5d8420fb94b497de26b87a1d70a87c01f5d8fd70f5eb" address="unix:///run/containerd/s/8b79fc77b166c1cbd9bfcf9b4e1605c03f76b204c657db3fa6d41733659ebf7d" namespace=k8s.io protocol=ttrpc version=3
Mar 25 02:34:02.610485 kubelet[2320]: W0325 02:34:02.610183 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.54:6443: connect: connection refused
Mar 25 02:34:02.610485 kubelet[2320]: E0325 02:34:02.610251 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:34:02.610485 kubelet[2320]: W0325 02:34:02.610345 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-3-6c96446f48.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.54:6443: connect: connection refused
Mar 25 02:34:02.610485 kubelet[2320]: E0325 02:34:02.610380 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-3-6c96446f48.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:34:02.624501 systemd[1]: Started cri-containerd-e45afc717807be5d6007b14ad3fe64084651fc6a547118dc681455feb19f9d53.scope - libcontainer container e45afc717807be5d6007b14ad3fe64084651fc6a547118dc681455feb19f9d53.
Mar 25 02:34:02.632240 systemd[1]: Started cri-containerd-071d856c01692748d24c5d8420fb94b497de26b87a1d70a87c01f5d8fd70f5eb.scope - libcontainer container 071d856c01692748d24c5d8420fb94b497de26b87a1d70a87c01f5d8fd70f5eb.
Mar 25 02:34:02.633637 systemd[1]: Started cri-containerd-98c42918800837d964a196c20039c1f70712d20736675b29f5a8736b288ba3b0.scope - libcontainer container 98c42918800837d964a196c20039c1f70712d20736675b29f5a8736b288ba3b0.
Mar 25 02:34:02.667234 kubelet[2320]: E0325 02:34:02.667165 2320 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-3-6c96446f48.novalocal?timeout=10s\": dial tcp 172.24.4.54:6443: connect: connection refused" interval="1.6s"
Mar 25 02:34:02.692566 kubelet[2320]: W0325 02:34:02.692479 2320 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.54:6443: connect: connection refused
Mar 25 02:34:02.692670 kubelet[2320]: E0325 02:34:02.692573 2320 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.54:6443: connect: connection refused" logger="UnhandledError"
Mar 25 02:34:02.703874 containerd[1474]: time="2025-03-25T02:34:02.703803139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal,Uid:c143363b661bfa9d2e5b90ddba7ee671,Namespace:kube-system,Attempt:0,} returns sandbox id \"e45afc717807be5d6007b14ad3fe64084651fc6a547118dc681455feb19f9d53\""
Mar 25 02:34:02.710431 containerd[1474]: time="2025-03-25T02:34:02.710316856Z" level=info msg="CreateContainer within sandbox \"e45afc717807be5d6007b14ad3fe64084651fc6a547118dc681455feb19f9d53\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 25 02:34:02.722641 containerd[1474]: time="2025-03-25T02:34:02.722594653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal,Uid:e9fe12a8a24dd3bc02a67a6d8f248f2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"98c42918800837d964a196c20039c1f70712d20736675b29f5a8736b288ba3b0\""
Mar 25 02:34:02.726302 containerd[1474]: time="2025-03-25T02:34:02.726240843Z" level=info msg="CreateContainer within sandbox \"98c42918800837d964a196c20039c1f70712d20736675b29f5a8736b288ba3b0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 25 02:34:02.728310 containerd[1474]: time="2025-03-25T02:34:02.728110321Z" level=info msg="Container 6c459087b7dcaff9ef6ff81d37584a67a2c82764f6b3c7ef74b6bd4370cbc6fd: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:34:02.750593 containerd[1474]: time="2025-03-25T02:34:02.750551764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-3-6c96446f48.novalocal,Uid:2311b0f5079ea0024d2a40d08de83932,Namespace:kube-system,Attempt:0,} returns sandbox id \"071d856c01692748d24c5d8420fb94b497de26b87a1d70a87c01f5d8fd70f5eb\""
Mar 25 02:34:02.756788 containerd[1474]: time="2025-03-25T02:34:02.756674398Z" level=info msg="CreateContainer within sandbox \"071d856c01692748d24c5d8420fb94b497de26b87a1d70a87c01f5d8fd70f5eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 25 02:34:02.775068 containerd[1474]: time="2025-03-25T02:34:02.774286175Z" level=info msg="Container dc1d7934e2208de364aa8c828c8868d610b1c4a711f676a06250236f8a778687: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:34:02.775972 containerd[1474]: time="2025-03-25T02:34:02.775945556Z" level=info msg="CreateContainer within sandbox \"e45afc717807be5d6007b14ad3fe64084651fc6a547118dc681455feb19f9d53\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6c459087b7dcaff9ef6ff81d37584a67a2c82764f6b3c7ef74b6bd4370cbc6fd\""
Mar 25 02:34:02.778408 containerd[1474]: time="2025-03-25T02:34:02.778345837Z" level=info msg="StartContainer for \"6c459087b7dcaff9ef6ff81d37584a67a2c82764f6b3c7ef74b6bd4370cbc6fd\""
Mar 25 02:34:02.780677 containerd[1474]: time="2025-03-25T02:34:02.780623506Z" level=info msg="connecting to shim 6c459087b7dcaff9ef6ff81d37584a67a2c82764f6b3c7ef74b6bd4370cbc6fd" address="unix:///run/containerd/s/c0ecf87b3b21bbbb81d66ba47def7864b287db603d6728bc3b29ad3bb8f52bb7" protocol=ttrpc version=3
Mar 25 02:34:02.786697 containerd[1474]: time="2025-03-25T02:34:02.786595519Z" level=info msg="CreateContainer within sandbox \"98c42918800837d964a196c20039c1f70712d20736675b29f5a8736b288ba3b0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc1d7934e2208de364aa8c828c8868d610b1c4a711f676a06250236f8a778687\""
Mar 25 02:34:02.787315 containerd[1474]: time="2025-03-25T02:34:02.787207284Z" level=info msg="StartContainer for \"dc1d7934e2208de364aa8c828c8868d610b1c4a711f676a06250236f8a778687\""
Mar 25 02:34:02.789028 containerd[1474]: time="2025-03-25T02:34:02.788996592Z" level=info msg="connecting to shim dc1d7934e2208de364aa8c828c8868d610b1c4a711f676a06250236f8a778687" address="unix:///run/containerd/s/c862a291fa53e1d47d7ce0a8795f149a30371eea680c24b9848bcf0e6691285a" protocol=ttrpc version=3
Mar 25 02:34:02.791432 containerd[1474]: time="2025-03-25T02:34:02.791407115Z" level=info msg="Container 32cfc266a2e9db3caa7f70870e5cc737fe1e33c62a6afb72c1db8e520f909422: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:34:02.803208 containerd[1474]: time="2025-03-25T02:34:02.803173991Z" level=info msg="CreateContainer within sandbox \"071d856c01692748d24c5d8420fb94b497de26b87a1d70a87c01f5d8fd70f5eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"32cfc266a2e9db3caa7f70870e5cc737fe1e33c62a6afb72c1db8e520f909422\""
Mar 25 02:34:02.804313 containerd[1474]: time="2025-03-25T02:34:02.803997307Z" level=info msg="StartContainer for \"32cfc266a2e9db3caa7f70870e5cc737fe1e33c62a6afb72c1db8e520f909422\""
Mar 25 02:34:02.806248 containerd[1474]: time="2025-03-25T02:34:02.805719933Z" level=info msg="connecting to shim 32cfc266a2e9db3caa7f70870e5cc737fe1e33c62a6afb72c1db8e520f909422" address="unix:///run/containerd/s/8b79fc77b166c1cbd9bfcf9b4e1605c03f76b204c657db3fa6d41733659ebf7d" protocol=ttrpc version=3
Mar 25 02:34:02.815463 systemd[1]: Started cri-containerd-6c459087b7dcaff9ef6ff81d37584a67a2c82764f6b3c7ef74b6bd4370cbc6fd.scope - libcontainer container 6c459087b7dcaff9ef6ff81d37584a67a2c82764f6b3c7ef74b6bd4370cbc6fd.
Mar 25 02:34:02.829807 systemd[1]: Started cri-containerd-dc1d7934e2208de364aa8c828c8868d610b1c4a711f676a06250236f8a778687.scope - libcontainer container dc1d7934e2208de364aa8c828c8868d610b1c4a711f676a06250236f8a778687.
Mar 25 02:34:02.842428 systemd[1]: Started cri-containerd-32cfc266a2e9db3caa7f70870e5cc737fe1e33c62a6afb72c1db8e520f909422.scope - libcontainer container 32cfc266a2e9db3caa7f70870e5cc737fe1e33c62a6afb72c1db8e520f909422.
Mar 25 02:34:02.854838 kubelet[2320]: I0325 02:34:02.854814 2320 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:02.855515 kubelet[2320]: E0325 02:34:02.855441 2320 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.54:6443/api/v1/nodes\": dial tcp 172.24.4.54:6443: connect: connection refused" node="ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:02.915827 containerd[1474]: time="2025-03-25T02:34:02.915776745Z" level=info msg="StartContainer for \"6c459087b7dcaff9ef6ff81d37584a67a2c82764f6b3c7ef74b6bd4370cbc6fd\" returns successfully"
Mar 25 02:34:02.916004 containerd[1474]: time="2025-03-25T02:34:02.915975439Z" level=info msg="StartContainer for \"dc1d7934e2208de364aa8c828c8868d610b1c4a711f676a06250236f8a778687\" returns successfully"
Mar 25 02:34:02.955814 containerd[1474]: time="2025-03-25T02:34:02.955643837Z" level=info msg="StartContainer for \"32cfc266a2e9db3caa7f70870e5cc737fe1e33c62a6afb72c1db8e520f909422\" returns successfully"
Mar 25 02:34:04.460541 kubelet[2320]: I0325 02:34:04.459220 2320 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:04.586672 kubelet[2320]: E0325 02:34:04.586606 2320 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284-0-0-3-6c96446f48.novalocal\" not found" node="ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:04.685935 kubelet[2320]: I0325 02:34:04.685901 2320 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:04.685935 kubelet[2320]: E0325 02:34:04.685940 2320 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4284-0-0-3-6c96446f48.novalocal\": node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found"
Mar 25 02:34:04.712822 kubelet[2320]: E0325 02:34:04.712571 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found"
Mar 25 02:34:04.812704 kubelet[2320]: E0325 02:34:04.812674 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found"
Mar 25 02:34:04.912986 kubelet[2320]: E0325 02:34:04.912861 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found"
Mar 25 02:34:05.014518 kubelet[2320]: E0325 02:34:05.013364 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found"
Mar 25 02:34:05.113665 kubelet[2320]: E0325 02:34:05.113561 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found"
Mar 25 02:34:05.214385 kubelet[2320]: E0325 02:34:05.214323 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found"
Mar 25 02:34:05.315507 kubelet[2320]: E0325 02:34:05.315331 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found"
Mar 25 02:34:05.415832 kubelet[2320]: E0325 02:34:05.415743 2320 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found"
Mar 25 02:34:06.237367 kubelet[2320]: I0325 02:34:06.236686 2320 apiserver.go:52] "Watching apiserver"
Mar 25 02:34:06.261221 kubelet[2320]: I0325 02:34:06.261020 2320 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 25 02:34:07.382993 systemd[1]: Reload requested from client PID 2587 ('systemctl') (unit session-11.scope)...
Mar 25 02:34:07.383031 systemd[1]: Reloading...
Mar 25 02:34:07.496312 zram_generator::config[2630]: No configuration found.
Mar 25 02:34:07.650256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 02:34:07.790254 systemd[1]: Reloading finished in 406 ms.
Mar 25 02:34:07.824943 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:34:07.826133 kubelet[2320]: I0325 02:34:07.825507 2320 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 02:34:07.842454 systemd[1]: kubelet.service: Deactivated successfully.
Mar 25 02:34:07.842674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:34:07.842730 systemd[1]: kubelet.service: Consumed 1.093s CPU time, 116.1M memory peak.
Mar 25 02:34:07.844743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 02:34:08.114530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 02:34:08.125528 (kubelet)[2696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 25 02:34:08.185476 kubelet[2696]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 02:34:08.185476 kubelet[2696]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 25 02:34:08.185476 kubelet[2696]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 02:34:08.186167 kubelet[2696]: I0325 02:34:08.185514 2696 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 25 02:34:08.201328 kubelet[2696]: I0325 02:34:08.200103 2696 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 25 02:34:08.201328 kubelet[2696]: I0325 02:34:08.200152 2696 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 25 02:34:08.201328 kubelet[2696]: I0325 02:34:08.200696 2696 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 25 02:34:08.203977 kubelet[2696]: I0325 02:34:08.203949 2696 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 25 02:34:08.209620 kubelet[2696]: I0325 02:34:08.209561 2696 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 02:34:08.219129 kubelet[2696]: I0325 02:34:08.219072 2696 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 25 02:34:08.222147 kubelet[2696]: I0325 02:34:08.222129 2696 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 02:34:08.222352 kubelet[2696]: I0325 02:34:08.222341 2696 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 25 02:34:08.222545 kubelet[2696]: I0325 02:34:08.222520 2696 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 02:34:08.222785 kubelet[2696]: I0325 02:34:08.222605 2696 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-3-6c96446f48.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 25 02:34:08.222908 kubelet[2696]: I0325 02:34:08.222896 2696 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 02:34:08.222965 kubelet[2696]: I0325 02:34:08.222958 2696 container_manager_linux.go:300] "Creating device plugin manager"
Mar 25 02:34:08.223040 kubelet[2696]: I0325 02:34:08.223030 2696 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 02:34:08.223180 kubelet[2696]: I0325 02:34:08.223169 2696 kubelet.go:408] "Attempting to sync node with API server"
Mar 25 02:34:08.225319 kubelet[2696]: I0325 02:34:08.225302 2696 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 02:34:08.225426 kubelet[2696]: I0325 02:34:08.225416 2696 kubelet.go:314] "Adding apiserver pod source"
Mar 25 02:34:08.225493 kubelet[2696]: I0325 02:34:08.225484 2696 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 02:34:08.227103 kubelet[2696]: I0325 02:34:08.227068 2696 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 02:34:08.227554 kubelet[2696]: I0325 02:34:08.227525 2696 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 02:34:08.228471 kubelet[2696]: I0325 02:34:08.227969 2696 server.go:1269] "Started kubelet"
Mar 25 02:34:08.233088 kubelet[2696]: I0325 02:34:08.232684 2696 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 02:34:08.240210 kubelet[2696]: I0325 02:34:08.239883 2696 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 02:34:08.242725 kubelet[2696]: I0325 02:34:08.242701 2696 server.go:460] "Adding debug handlers to kubelet server"
Mar 25 02:34:08.244662 kubelet[2696]: I0325 02:34:08.244616 2696 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 02:34:08.244913 kubelet[2696]: I0325 02:34:08.244900 2696 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 02:34:08.248799 kubelet[2696]: I0325 02:34:08.245531 2696 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 25 02:34:08.254218 kubelet[2696]: I0325 02:34:08.246769 2696 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 25 02:34:08.260796 kubelet[2696]: I0325 02:34:08.246782 2696 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 25 02:34:08.263653 kubelet[2696]: E0325 02:34:08.246935 2696 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4284-0-0-3-6c96446f48.novalocal\" not found"
Mar 25 02:34:08.263653 kubelet[2696]: I0325 02:34:08.261922 2696 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 02:34:08.263653 kubelet[2696]: I0325 02:34:08.261172 2696 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 02:34:08.266730 kubelet[2696]: I0325 02:34:08.266707 2696 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 25 02:34:08.266853 kubelet[2696]: I0325 02:34:08.266844 2696 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 25 02:34:08.266924 kubelet[2696]: I0325 02:34:08.266915 2696 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 25 02:34:08.267045 kubelet[2696]: E0325 02:34:08.267012 2696 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 25 02:34:08.278126 kubelet[2696]: I0325 02:34:08.277535 2696 factory.go:221] Registration of the systemd container factory successfully
Mar 25 02:34:08.278126 kubelet[2696]: I0325 02:34:08.277613 2696 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 02:34:08.284458 kubelet[2696]: E0325 02:34:08.284429 2696 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 02:34:08.288444 kubelet[2696]: I0325 02:34:08.286441 2696 factory.go:221] Registration of the containerd container factory successfully
Mar 25 02:34:08.345673 kubelet[2696]: I0325 02:34:08.345639 2696 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 25 02:34:08.345673 kubelet[2696]: I0325 02:34:08.345658 2696 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 25 02:34:08.345673 kubelet[2696]: I0325 02:34:08.345678 2696 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 02:34:08.345891 kubelet[2696]: I0325 02:34:08.345868 2696 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 25 02:34:08.345948 kubelet[2696]: I0325 02:34:08.345879 2696 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 25 02:34:08.345948 kubelet[2696]: I0325 02:34:08.345909 2696 policy_none.go:49] "None policy: Start"
Mar 25 02:34:08.347338 kubelet[2696]: I0325 02:34:08.347315 2696 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 25 02:34:08.347466 kubelet[2696]: I0325 02:34:08.347452 2696 state_mem.go:35] "Initializing new in-memory state store"
Mar 25 02:34:08.348097 kubelet[2696]: I0325 02:34:08.348058 2696 state_mem.go:75] "Updated machine memory state"
Mar 25 02:34:08.361510 kubelet[2696]: I0325 02:34:08.361464 2696 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 25 02:34:08.361873 kubelet[2696]: I0325 02:34:08.361862 2696 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 25 02:34:08.361984 kubelet[2696]: I0325 02:34:08.361954 2696 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 25 02:34:08.363365 kubelet[2696]: I0325 02:34:08.362607 2696 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 25 02:34:08.391963 kubelet[2696]: W0325 02:34:08.391934 2696 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 25 02:34:08.397365 kubelet[2696]: W0325 02:34:08.397342 2696 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 25 02:34:08.397703 kubelet[2696]: W0325 02:34:08.397644 2696 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 25 02:34:08.462650 kubelet[2696]: I0325 02:34:08.462364 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9fe12a8a24dd3bc02a67a6d8f248f2a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"e9fe12a8a24dd3bc02a67a6d8f248f2a\") " pod="kube-system/kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.462650 kubelet[2696]: I0325 02:34:08.462405 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c143363b661bfa9d2e5b90ddba7ee671-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"c143363b661bfa9d2e5b90ddba7ee671\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.462650 kubelet[2696]: I0325 02:34:08.462427 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9fe12a8a24dd3bc02a67a6d8f248f2a-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"e9fe12a8a24dd3bc02a67a6d8f248f2a\") " pod="kube-system/kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.462650 kubelet[2696]: I0325 02:34:08.462457 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c143363b661bfa9d2e5b90ddba7ee671-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"c143363b661bfa9d2e5b90ddba7ee671\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.463033 kubelet[2696]: I0325 02:34:08.462479 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c143363b661bfa9d2e5b90ddba7ee671-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"c143363b661bfa9d2e5b90ddba7ee671\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.463033 kubelet[2696]: I0325 02:34:08.462499 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c143363b661bfa9d2e5b90ddba7ee671-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"c143363b661bfa9d2e5b90ddba7ee671\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.463033 kubelet[2696]: I0325 02:34:08.462518 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c143363b661bfa9d2e5b90ddba7ee671-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"c143363b661bfa9d2e5b90ddba7ee671\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.463033 kubelet[2696]: I0325 02:34:08.462542 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2311b0f5079ea0024d2a40d08de83932-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"2311b0f5079ea0024d2a40d08de83932\") " pod="kube-system/kube-scheduler-ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.463296 kubelet[2696]: I0325 02:34:08.462561 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9fe12a8a24dd3bc02a67a6d8f248f2a-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal\" (UID: \"e9fe12a8a24dd3bc02a67a6d8f248f2a\") " pod="kube-system/kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.466328 kubelet[2696]: I0325 02:34:08.466081 2696 kubelet_node_status.go:72] "Attempting to register node" node="ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.476775 kubelet[2696]: I0325 02:34:08.476464 2696 kubelet_node_status.go:111] "Node was previously registered" node="ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:08.476775 kubelet[2696]: I0325 02:34:08.476541 2696 kubelet_node_status.go:75] "Successfully registered node" node="ci-4284-0-0-3-6c96446f48.novalocal"
Mar 25 02:34:09.228012 kubelet[2696]: I0325 02:34:09.227077 2696 apiserver.go:52] "Watching apiserver"
Mar 25 02:34:09.261674 kubelet[2696]: I0325 02:34:09.261607 2696 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 25 02:34:09.395829 kubelet[2696]: I0325 02:34:09.395491 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-3-6c96446f48.novalocal" podStartSLOduration=1.3954584429999999 podStartE2EDuration="1.395458443s" podCreationTimestamp="2025-03-25 02:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:34:09.370431978 +0000 UTC m=+1.241186830" watchObservedRunningTime="2025-03-25 02:34:09.395458443 +0000 UTC m=+1.266213285"
Mar 25 02:34:09.417404 kubelet[2696]: I0325 02:34:09.416641 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-3-6c96446f48.novalocal" podStartSLOduration=1.416605975 podStartE2EDuration="1.416605975s" podCreationTimestamp="2025-03-25 02:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:34:09.396968012 +0000 UTC m=+1.267722855" watchObservedRunningTime="2025-03-25 02:34:09.416605975 +0000 UTC m=+1.287360817"
Mar 25 02:34:12.781881 kubelet[2696]: I0325 02:34:12.781359 2696 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 25 02:34:12.783874 containerd[1474]: time="2025-03-25T02:34:12.782468285Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 25 02:34:12.784146 kubelet[2696]: I0325 02:34:12.782886 2696 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 25 02:34:13.603465 kubelet[2696]: I0325 02:34:13.603363 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-3-6c96446f48.novalocal" podStartSLOduration=5.603331947 podStartE2EDuration="5.603331947s" podCreationTimestamp="2025-03-25 02:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:34:09.417582319 +0000 UTC m=+1.288337161" watchObservedRunningTime="2025-03-25 02:34:13.603331947 +0000 UTC m=+5.474086789"
Mar 25 02:34:13.610891 kubelet[2696]: W0325 02:34:13.610830 2696 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4284-0-0-3-6c96446f48.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-3-6c96446f48.novalocal' and this object
Mar 25 02:34:13.611137 kubelet[2696]: E0325 02:34:13.610927 2696 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4284-0-0-3-6c96446f48.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-3-6c96446f48.novalocal' and this object" logger="UnhandledError"
Mar 25 02:34:13.611137 kubelet[2696]: W0325 02:34:13.611044 2696 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4284-0-0-3-6c96446f48.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4284-0-0-3-6c96446f48.novalocal' and this object
Mar 25 02:34:13.611137 kubelet[2696]: E0325 02:34:13.611076 2696 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4284-0-0-3-6c96446f48.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4284-0-0-3-6c96446f48.novalocal' and this object" logger="UnhandledError"
Mar 25 02:34:13.626496 systemd[1]: Created slice kubepods-besteffort-pod66189d9c_5c77_49ec_9920_faad8917e65a.slice - libcontainer container kubepods-besteffort-pod66189d9c_5c77_49ec_9920_faad8917e65a.slice.
Mar 25 02:34:13.697306 kubelet[2696]: I0325 02:34:13.697151 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66189d9c-5c77-49ec-9920-faad8917e65a-lib-modules\") pod \"kube-proxy-66hlj\" (UID: \"66189d9c-5c77-49ec-9920-faad8917e65a\") " pod="kube-system/kube-proxy-66hlj"
Mar 25 02:34:13.697306 kubelet[2696]: I0325 02:34:13.697192 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jtln\" (UniqueName: \"kubernetes.io/projected/66189d9c-5c77-49ec-9920-faad8917e65a-kube-api-access-8jtln\") pod \"kube-proxy-66hlj\" (UID: \"66189d9c-5c77-49ec-9920-faad8917e65a\") " pod="kube-system/kube-proxy-66hlj"
Mar 25 02:34:13.697306 kubelet[2696]: I0325 02:34:13.697214 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66189d9c-5c77-49ec-9920-faad8917e65a-kube-proxy\") pod \"kube-proxy-66hlj\" (UID: \"66189d9c-5c77-49ec-9920-faad8917e65a\") " pod="kube-system/kube-proxy-66hlj"
Mar 25 02:34:13.697306 kubelet[2696]: I0325 02:34:13.697230 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66189d9c-5c77-49ec-9920-faad8917e65a-xtables-lock\") pod \"kube-proxy-66hlj\" (UID: \"66189d9c-5c77-49ec-9920-faad8917e65a\") " pod="kube-system/kube-proxy-66hlj"
Mar 25 02:34:13.842520 systemd[1]: Created slice kubepods-besteffort-pod649a9c1f_594d_49e8_aa55_805469a5eb73.slice - libcontainer container kubepods-besteffort-pod649a9c1f_594d_49e8_aa55_805469a5eb73.slice.
Mar 25 02:34:13.898421 kubelet[2696]: I0325 02:34:13.898333 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/649a9c1f-594d-49e8-aa55-805469a5eb73-var-lib-calico\") pod \"tigera-operator-64ff5465b7-xcdv5\" (UID: \"649a9c1f-594d-49e8-aa55-805469a5eb73\") " pod="tigera-operator/tigera-operator-64ff5465b7-xcdv5" Mar 25 02:34:13.898421 kubelet[2696]: I0325 02:34:13.898376 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76tq\" (UniqueName: \"kubernetes.io/projected/649a9c1f-594d-49e8-aa55-805469a5eb73-kube-api-access-t76tq\") pod \"tigera-operator-64ff5465b7-xcdv5\" (UID: \"649a9c1f-594d-49e8-aa55-805469a5eb73\") " pod="tigera-operator/tigera-operator-64ff5465b7-xcdv5" Mar 25 02:34:14.152512 containerd[1474]: time="2025-03-25T02:34:14.151897601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-xcdv5,Uid:649a9c1f-594d-49e8-aa55-805469a5eb73,Namespace:tigera-operator,Attempt:0,}" Mar 25 02:34:14.198184 containerd[1474]: time="2025-03-25T02:34:14.197688696Z" level=info msg="connecting to shim fdf5955ae07b7400af189eacf5d3232d800b1813a55315f305f4a615c5f9b7ea" address="unix:///run/containerd/s/494bc57dbf027ff43793acb5d9dc854d751afcae670875d4c65e4b240429b825" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:34:14.260448 systemd[1]: Started cri-containerd-fdf5955ae07b7400af189eacf5d3232d800b1813a55315f305f4a615c5f9b7ea.scope - libcontainer container fdf5955ae07b7400af189eacf5d3232d800b1813a55315f305f4a615c5f9b7ea. 
Mar 25 02:34:14.307438 containerd[1474]: time="2025-03-25T02:34:14.307385733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-xcdv5,Uid:649a9c1f-594d-49e8-aa55-805469a5eb73,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fdf5955ae07b7400af189eacf5d3232d800b1813a55315f305f4a615c5f9b7ea\"" Mar 25 02:34:14.309631 containerd[1474]: time="2025-03-25T02:34:14.309557363Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 25 02:34:14.473681 sudo[1752]: pam_unix(sudo:session): session closed for user root Mar 25 02:34:14.625746 sshd[1751]: Connection closed by 172.24.4.1 port 52178 Mar 25 02:34:14.627600 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Mar 25 02:34:14.649633 systemd[1]: sshd@8-172.24.4.54:22-172.24.4.1:52178.service: Deactivated successfully. Mar 25 02:34:14.656427 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 02:34:14.657385 systemd[1]: session-11.scope: Consumed 6.587s CPU time, 223.3M memory peak. Mar 25 02:34:14.663444 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Mar 25 02:34:14.666460 systemd-logind[1458]: Removed session 11. Mar 25 02:34:14.801517 kubelet[2696]: E0325 02:34:14.799058 2696 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 25 02:34:14.801517 kubelet[2696]: E0325 02:34:14.801354 2696 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66189d9c-5c77-49ec-9920-faad8917e65a-kube-proxy podName:66189d9c-5c77-49ec-9920-faad8917e65a nodeName:}" failed. No retries permitted until 2025-03-25 02:34:15.299170499 +0000 UTC m=+7.169925341 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/66189d9c-5c77-49ec-9920-faad8917e65a-kube-proxy") pod "kube-proxy-66hlj" (UID: "66189d9c-5c77-49ec-9920-faad8917e65a") : failed to sync configmap cache: timed out waiting for the condition Mar 25 02:34:15.441635 containerd[1474]: time="2025-03-25T02:34:15.441517985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66hlj,Uid:66189d9c-5c77-49ec-9920-faad8917e65a,Namespace:kube-system,Attempt:0,}" Mar 25 02:34:15.494353 containerd[1474]: time="2025-03-25T02:34:15.493116815Z" level=info msg="connecting to shim 07d298397a3a570d698ed175cd5ca75a204889abfd8a110b3a90733216dddef9" address="unix:///run/containerd/s/10688c6a2c0949e215adb81240bef89078af6b79d56def587b39778a5c5f9298" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:34:15.557421 systemd[1]: Started cri-containerd-07d298397a3a570d698ed175cd5ca75a204889abfd8a110b3a90733216dddef9.scope - libcontainer container 07d298397a3a570d698ed175cd5ca75a204889abfd8a110b3a90733216dddef9. 
Mar 25 02:34:15.593216 containerd[1474]: time="2025-03-25T02:34:15.593149529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-66hlj,Uid:66189d9c-5c77-49ec-9920-faad8917e65a,Namespace:kube-system,Attempt:0,} returns sandbox id \"07d298397a3a570d698ed175cd5ca75a204889abfd8a110b3a90733216dddef9\"" Mar 25 02:34:15.596353 containerd[1474]: time="2025-03-25T02:34:15.596312498Z" level=info msg="CreateContainer within sandbox \"07d298397a3a570d698ed175cd5ca75a204889abfd8a110b3a90733216dddef9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 25 02:34:15.619671 containerd[1474]: time="2025-03-25T02:34:15.619522929Z" level=info msg="Container 27fdd23c0ef1121a51a5afabd9ab5ff0afb103f0fa3dbae2977e652a167d0452: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:34:15.637508 containerd[1474]: time="2025-03-25T02:34:15.637437390Z" level=info msg="CreateContainer within sandbox \"07d298397a3a570d698ed175cd5ca75a204889abfd8a110b3a90733216dddef9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"27fdd23c0ef1121a51a5afabd9ab5ff0afb103f0fa3dbae2977e652a167d0452\"" Mar 25 02:34:15.638375 containerd[1474]: time="2025-03-25T02:34:15.638328877Z" level=info msg="StartContainer for \"27fdd23c0ef1121a51a5afabd9ab5ff0afb103f0fa3dbae2977e652a167d0452\"" Mar 25 02:34:15.640049 containerd[1474]: time="2025-03-25T02:34:15.639998222Z" level=info msg="connecting to shim 27fdd23c0ef1121a51a5afabd9ab5ff0afb103f0fa3dbae2977e652a167d0452" address="unix:///run/containerd/s/10688c6a2c0949e215adb81240bef89078af6b79d56def587b39778a5c5f9298" protocol=ttrpc version=3 Mar 25 02:34:15.668408 systemd[1]: Started cri-containerd-27fdd23c0ef1121a51a5afabd9ab5ff0afb103f0fa3dbae2977e652a167d0452.scope - libcontainer container 27fdd23c0ef1121a51a5afabd9ab5ff0afb103f0fa3dbae2977e652a167d0452. 
Mar 25 02:34:15.717984 containerd[1474]: time="2025-03-25T02:34:15.717557760Z" level=info msg="StartContainer for \"27fdd23c0ef1121a51a5afabd9ab5ff0afb103f0fa3dbae2977e652a167d0452\" returns successfully" Mar 25 02:34:16.368926 kubelet[2696]: I0325 02:34:16.368747 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-66hlj" podStartSLOduration=3.3687306870000002 podStartE2EDuration="3.368730687s" podCreationTimestamp="2025-03-25 02:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:34:16.368553497 +0000 UTC m=+8.239308289" watchObservedRunningTime="2025-03-25 02:34:16.368730687 +0000 UTC m=+8.239485499" Mar 25 02:34:16.890280 containerd[1474]: time="2025-03-25T02:34:16.890222269Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:34:16.891687 containerd[1474]: time="2025-03-25T02:34:16.891650967Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008" Mar 25 02:34:16.893053 containerd[1474]: time="2025-03-25T02:34:16.893030644Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:34:16.895521 containerd[1474]: time="2025-03-25T02:34:16.895497753Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:34:16.896946 containerd[1474]: time="2025-03-25T02:34:16.896921320Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest 
\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 2.587321991s" Mar 25 02:34:16.897008 containerd[1474]: time="2025-03-25T02:34:16.896949948Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\"" Mar 25 02:34:16.898824 containerd[1474]: time="2025-03-25T02:34:16.898799101Z" level=info msg="CreateContainer within sandbox \"fdf5955ae07b7400af189eacf5d3232d800b1813a55315f305f4a615c5f9b7ea\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 25 02:34:16.911131 containerd[1474]: time="2025-03-25T02:34:16.911026790Z" level=info msg="Container a6b3effd89cf099df44eacf55097c9eb3fda13d886a3217d170abcd8d52606f9: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:34:16.917653 containerd[1474]: time="2025-03-25T02:34:16.917615084Z" level=info msg="CreateContainer within sandbox \"fdf5955ae07b7400af189eacf5d3232d800b1813a55315f305f4a615c5f9b7ea\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a6b3effd89cf099df44eacf55097c9eb3fda13d886a3217d170abcd8d52606f9\"" Mar 25 02:34:16.918302 containerd[1474]: time="2025-03-25T02:34:16.918165555Z" level=info msg="StartContainer for \"a6b3effd89cf099df44eacf55097c9eb3fda13d886a3217d170abcd8d52606f9\"" Mar 25 02:34:16.919080 containerd[1474]: time="2025-03-25T02:34:16.918997167Z" level=info msg="connecting to shim a6b3effd89cf099df44eacf55097c9eb3fda13d886a3217d170abcd8d52606f9" address="unix:///run/containerd/s/494bc57dbf027ff43793acb5d9dc854d751afcae670875d4c65e4b240429b825" protocol=ttrpc version=3 Mar 25 02:34:16.942408 systemd[1]: Started cri-containerd-a6b3effd89cf099df44eacf55097c9eb3fda13d886a3217d170abcd8d52606f9.scope - libcontainer container a6b3effd89cf099df44eacf55097c9eb3fda13d886a3217d170abcd8d52606f9. 
Mar 25 02:34:16.973067 containerd[1474]: time="2025-03-25T02:34:16.973030998Z" level=info msg="StartContainer for \"a6b3effd89cf099df44eacf55097c9eb3fda13d886a3217d170abcd8d52606f9\" returns successfully" Mar 25 02:34:20.206479 kubelet[2696]: I0325 02:34:20.206132 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-xcdv5" podStartSLOduration=4.617381657 podStartE2EDuration="7.206117032s" podCreationTimestamp="2025-03-25 02:34:13 +0000 UTC" firstStartedPulling="2025-03-25 02:34:14.308824895 +0000 UTC m=+6.179579687" lastFinishedPulling="2025-03-25 02:34:16.89756027 +0000 UTC m=+8.768315062" observedRunningTime="2025-03-25 02:34:17.366499957 +0000 UTC m=+9.237254809" watchObservedRunningTime="2025-03-25 02:34:20.206117032 +0000 UTC m=+12.076871824" Mar 25 02:34:20.313966 systemd[1]: Created slice kubepods-besteffort-pod52a8d360_8408_420b_b23d_33dcd0a48cf2.slice - libcontainer container kubepods-besteffort-pod52a8d360_8408_420b_b23d_33dcd0a48cf2.slice. 
Mar 25 02:34:20.342546 kubelet[2696]: I0325 02:34:20.342508 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/52a8d360-8408-420b-b23d-33dcd0a48cf2-typha-certs\") pod \"calico-typha-7dd745bcb7-9dngt\" (UID: \"52a8d360-8408-420b-b23d-33dcd0a48cf2\") " pod="calico-system/calico-typha-7dd745bcb7-9dngt" Mar 25 02:34:20.342718 kubelet[2696]: I0325 02:34:20.342552 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd4qr\" (UniqueName: \"kubernetes.io/projected/52a8d360-8408-420b-b23d-33dcd0a48cf2-kube-api-access-vd4qr\") pod \"calico-typha-7dd745bcb7-9dngt\" (UID: \"52a8d360-8408-420b-b23d-33dcd0a48cf2\") " pod="calico-system/calico-typha-7dd745bcb7-9dngt" Mar 25 02:34:20.342718 kubelet[2696]: I0325 02:34:20.342579 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52a8d360-8408-420b-b23d-33dcd0a48cf2-tigera-ca-bundle\") pod \"calico-typha-7dd745bcb7-9dngt\" (UID: \"52a8d360-8408-420b-b23d-33dcd0a48cf2\") " pod="calico-system/calico-typha-7dd745bcb7-9dngt" Mar 25 02:34:20.508740 systemd[1]: Created slice kubepods-besteffort-pod4f183a8c_bcfc_4932_a74e_138694d2da2a.slice - libcontainer container kubepods-besteffort-pod4f183a8c_bcfc_4932_a74e_138694d2da2a.slice. 
Mar 25 02:34:20.545675 kubelet[2696]: I0325 02:34:20.544376 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f183a8c-bcfc-4932-a74e-138694d2da2a-lib-modules\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.545995 kubelet[2696]: I0325 02:34:20.545859 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4f183a8c-bcfc-4932-a74e-138694d2da2a-node-certs\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.545995 kubelet[2696]: I0325 02:34:20.545886 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4f183a8c-bcfc-4932-a74e-138694d2da2a-cni-net-dir\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.545995 kubelet[2696]: I0325 02:34:20.545942 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4f183a8c-bcfc-4932-a74e-138694d2da2a-flexvol-driver-host\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.545995 kubelet[2696]: I0325 02:34:20.545966 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47v2p\" (UniqueName: \"kubernetes.io/projected/4f183a8c-bcfc-4932-a74e-138694d2da2a-kube-api-access-47v2p\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.546366 kubelet[2696]: I0325 02:34:20.546195 
2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4f183a8c-bcfc-4932-a74e-138694d2da2a-policysync\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.546366 kubelet[2696]: I0325 02:34:20.546311 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f183a8c-bcfc-4932-a74e-138694d2da2a-var-lib-calico\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.546675 kubelet[2696]: I0325 02:34:20.546467 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4f183a8c-bcfc-4932-a74e-138694d2da2a-cni-bin-dir\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.546675 kubelet[2696]: I0325 02:34:20.546494 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4f183a8c-bcfc-4932-a74e-138694d2da2a-var-run-calico\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.546675 kubelet[2696]: I0325 02:34:20.546524 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f183a8c-bcfc-4932-a74e-138694d2da2a-xtables-lock\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.546675 kubelet[2696]: I0325 02:34:20.546551 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f183a8c-bcfc-4932-a74e-138694d2da2a-tigera-ca-bundle\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.546675 kubelet[2696]: I0325 02:34:20.546568 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4f183a8c-bcfc-4932-a74e-138694d2da2a-cni-log-dir\") pod \"calico-node-sd296\" (UID: \"4f183a8c-bcfc-4932-a74e-138694d2da2a\") " pod="calico-system/calico-node-sd296" Mar 25 02:34:20.618425 containerd[1474]: time="2025-03-25T02:34:20.618343731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dd745bcb7-9dngt,Uid:52a8d360-8408-420b-b23d-33dcd0a48cf2,Namespace:calico-system,Attempt:0,}" Mar 25 02:34:20.631839 kubelet[2696]: E0325 02:34:20.631781 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j9qch" podUID="f79a0a0e-5416-4235-b1a3-2a817bf38d19" Mar 25 02:34:20.647330 kubelet[2696]: I0325 02:34:20.646826 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8zgj\" (UniqueName: \"kubernetes.io/projected/f79a0a0e-5416-4235-b1a3-2a817bf38d19-kube-api-access-w8zgj\") pod \"csi-node-driver-j9qch\" (UID: \"f79a0a0e-5416-4235-b1a3-2a817bf38d19\") " pod="calico-system/csi-node-driver-j9qch" Mar 25 02:34:20.647330 kubelet[2696]: I0325 02:34:20.646902 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f79a0a0e-5416-4235-b1a3-2a817bf38d19-registration-dir\") pod \"csi-node-driver-j9qch\" (UID: 
\"f79a0a0e-5416-4235-b1a3-2a817bf38d19\") " pod="calico-system/csi-node-driver-j9qch" Mar 25 02:34:20.647330 kubelet[2696]: I0325 02:34:20.646966 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f79a0a0e-5416-4235-b1a3-2a817bf38d19-kubelet-dir\") pod \"csi-node-driver-j9qch\" (UID: \"f79a0a0e-5416-4235-b1a3-2a817bf38d19\") " pod="calico-system/csi-node-driver-j9qch" Mar 25 02:34:20.647330 kubelet[2696]: I0325 02:34:20.646986 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f79a0a0e-5416-4235-b1a3-2a817bf38d19-varrun\") pod \"csi-node-driver-j9qch\" (UID: \"f79a0a0e-5416-4235-b1a3-2a817bf38d19\") " pod="calico-system/csi-node-driver-j9qch" Mar 25 02:34:20.647330 kubelet[2696]: I0325 02:34:20.647004 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f79a0a0e-5416-4235-b1a3-2a817bf38d19-socket-dir\") pod \"csi-node-driver-j9qch\" (UID: \"f79a0a0e-5416-4235-b1a3-2a817bf38d19\") " pod="calico-system/csi-node-driver-j9qch" Mar 25 02:34:20.665571 containerd[1474]: time="2025-03-25T02:34:20.664467171Z" level=info msg="connecting to shim 78718bca15bde8c6c36ab34dcbeb78f8f4bc41bc81109a26ee71b26f1708e6d7" address="unix:///run/containerd/s/5b7159d0fe250ce3e654df2c76aaf30bca957f6418bcefd217fee22dab39cf8b" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:34:20.672499 kubelet[2696]: E0325 02:34:20.671991 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:20.672499 kubelet[2696]: W0325 02:34:20.672126 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" Mar 25 02:34:20.672499 kubelet[2696]: E0325 02:34:20.672148 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:20.698331 kubelet[2696]: E0325 02:34:20.697619 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:20.698331 kubelet[2696]: W0325 02:34:20.698209 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:20.698331 kubelet[2696]: E0325 02:34:20.698246 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:20.713436 systemd[1]: Started cri-containerd-78718bca15bde8c6c36ab34dcbeb78f8f4bc41bc81109a26ee71b26f1708e6d7.scope - libcontainer container 78718bca15bde8c6c36ab34dcbeb78f8f4bc41bc81109a26ee71b26f1708e6d7. Mar 25 02:34:20.749186 kubelet[2696]: E0325 02:34:20.749046 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:20.749186 kubelet[2696]: W0325 02:34:20.749069 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:20.749186 kubelet[2696]: E0325 02:34:20.749087 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:20.749837 kubelet[2696]: E0325 02:34:20.749533 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:20.749837 kubelet[2696]: W0325 02:34:20.749545 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:20.749837 kubelet[2696]: E0325 02:34:20.749610 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:20.750514 kubelet[2696]: E0325 02:34:20.750223 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:20.750514 kubelet[2696]: W0325 02:34:20.750235 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:20.750514 kubelet[2696]: E0325 02:34:20.750246 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:20.750995 kubelet[2696]: E0325 02:34:20.750804 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:20.750995 kubelet[2696]: W0325 02:34:20.750816 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:20.750995 kubelet[2696]: E0325 02:34:20.750857 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:20.751633 kubelet[2696]: E0325 02:34:20.751403 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:20.751633 kubelet[2696]: W0325 02:34:20.751414 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:20.751633 kubelet[2696]: E0325 02:34:20.751483 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:20.752305 kubelet[2696]: E0325 02:34:20.752158 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:20.752305 kubelet[2696]: W0325 02:34:20.752169 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:20.752305 kubelet[2696]: E0325 02:34:20.752209 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:20.752665 kubelet[2696]: E0325 02:34:20.752649 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:20.752665 kubelet[2696]: W0325 02:34:20.752663 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:20.752784 kubelet[2696]: E0325 02:34:20.752679 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 25 02:34:20.753068 kubelet[2696]: E0325 02:34:20.753050 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.753068 kubelet[2696]: W0325 02:34:20.753064 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.753223 kubelet[2696]: E0325 02:34:20.753207 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.753492 kubelet[2696]: E0325 02:34:20.753476 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.753492 kubelet[2696]: W0325 02:34:20.753490 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.753934 kubelet[2696]: E0325 02:34:20.753915 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.754038 kubelet[2696]: E0325 02:34:20.754024 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.754038 kubelet[2696]: W0325 02:34:20.754037 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.754127 kubelet[2696]: E0325 02:34:20.754093 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.754383 kubelet[2696]: E0325 02:34:20.754367 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.754383 kubelet[2696]: W0325 02:34:20.754380 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.754472 kubelet[2696]: E0325 02:34:20.754452 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.754629 kubelet[2696]: E0325 02:34:20.754605 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.754629 kubelet[2696]: W0325 02:34:20.754614 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.754730 kubelet[2696]: E0325 02:34:20.754703 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.754883 kubelet[2696]: E0325 02:34:20.754866 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.754883 kubelet[2696]: W0325 02:34:20.754880 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.754981 kubelet[2696]: E0325 02:34:20.754968 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.755109 kubelet[2696]: E0325 02:34:20.755091 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.755109 kubelet[2696]: W0325 02:34:20.755103 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.755197 kubelet[2696]: E0325 02:34:20.755115 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.755381 kubelet[2696]: E0325 02:34:20.755368 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.755381 kubelet[2696]: W0325 02:34:20.755380 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.755825 kubelet[2696]: E0325 02:34:20.755799 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.756144 kubelet[2696]: E0325 02:34:20.756123 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.756144 kubelet[2696]: W0325 02:34:20.756138 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.756384 kubelet[2696]: E0325 02:34:20.756362 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.756555 kubelet[2696]: E0325 02:34:20.756508 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.756555 kubelet[2696]: W0325 02:34:20.756521 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.756658 kubelet[2696]: E0325 02:34:20.756628 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.756876 kubelet[2696]: E0325 02:34:20.756859 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.756876 kubelet[2696]: W0325 02:34:20.756872 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.757242 kubelet[2696]: E0325 02:34:20.757067 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.757555 kubelet[2696]: E0325 02:34:20.757525 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.757555 kubelet[2696]: W0325 02:34:20.757539 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.757787 kubelet[2696]: E0325 02:34:20.757697 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.758078 kubelet[2696]: E0325 02:34:20.758025 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.758078 kubelet[2696]: W0325 02:34:20.758039 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.758305 kubelet[2696]: E0325 02:34:20.758175 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.758305 kubelet[2696]: E0325 02:34:20.758234 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.758305 kubelet[2696]: W0325 02:34:20.758242 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.758622 kubelet[2696]: E0325 02:34:20.758601 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.758896 kubelet[2696]: E0325 02:34:20.758667 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.758896 kubelet[2696]: W0325 02:34:20.758679 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.758896 kubelet[2696]: E0325 02:34:20.758741 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.759142 kubelet[2696]: E0325 02:34:20.759123 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.759142 kubelet[2696]: W0325 02:34:20.759138 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.759470 kubelet[2696]: E0325 02:34:20.759152 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.761216 kubelet[2696]: E0325 02:34:20.761195 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.761216 kubelet[2696]: W0325 02:34:20.761213 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.761378 kubelet[2696]: E0325 02:34:20.761229 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.762471 kubelet[2696]: E0325 02:34:20.762044 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.762471 kubelet[2696]: W0325 02:34:20.762082 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.762471 kubelet[2696]: E0325 02:34:20.762094 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.780458 kubelet[2696]: E0325 02:34:20.780427 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:20.780653 kubelet[2696]: W0325 02:34:20.780465 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:20.780653 kubelet[2696]: E0325 02:34:20.780485 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:20.814519 containerd[1474]: time="2025-03-25T02:34:20.814461893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sd296,Uid:4f183a8c-bcfc-4932-a74e-138694d2da2a,Namespace:calico-system,Attempt:0,}"
Mar 25 02:34:20.839161 containerd[1474]: time="2025-03-25T02:34:20.839081770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dd745bcb7-9dngt,Uid:52a8d360-8408-420b-b23d-33dcd0a48cf2,Namespace:calico-system,Attempt:0,} returns sandbox id \"78718bca15bde8c6c36ab34dcbeb78f8f4bc41bc81109a26ee71b26f1708e6d7\""
Mar 25 02:34:20.842561 containerd[1474]: time="2025-03-25T02:34:20.842447386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\""
Mar 25 02:34:20.863063 containerd[1474]: time="2025-03-25T02:34:20.862904787Z" level=info msg="connecting to shim 0fb40c8995d3879246f0d889943529f192ad24d83901858dd4c00c511fb70437" address="unix:///run/containerd/s/8ef4cdecd244b15348bb08f98c058cfb4ff3656087fe93c08a37edd1c9dc089a" namespace=k8s.io protocol=ttrpc version=3
Mar 25 02:34:20.900432 systemd[1]: Started cri-containerd-0fb40c8995d3879246f0d889943529f192ad24d83901858dd4c00c511fb70437.scope - libcontainer container 0fb40c8995d3879246f0d889943529f192ad24d83901858dd4c00c511fb70437.
Mar 25 02:34:20.937613 containerd[1474]: time="2025-03-25T02:34:20.937569140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sd296,Uid:4f183a8c-bcfc-4932-a74e-138694d2da2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fb40c8995d3879246f0d889943529f192ad24d83901858dd4c00c511fb70437\""
Mar 25 02:34:22.269399 kubelet[2696]: E0325 02:34:22.269253 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j9qch" podUID="f79a0a0e-5416-4235-b1a3-2a817bf38d19"
Mar 25 02:34:23.862075 containerd[1474]: time="2025-03-25T02:34:23.861845575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:23.863292 containerd[1474]: time="2025-03-25T02:34:23.863147553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075"
Mar 25 02:34:23.864500 containerd[1474]: time="2025-03-25T02:34:23.864455612Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:23.868321 containerd[1474]: time="2025-03-25T02:34:23.867644802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:23.868321 containerd[1474]: time="2025-03-25T02:34:23.868198916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 3.025704196s"
Mar 25 02:34:23.868321 containerd[1474]: time="2025-03-25T02:34:23.868223435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\""
Mar 25 02:34:23.870091 containerd[1474]: time="2025-03-25T02:34:23.870070209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\""
Mar 25 02:34:23.882362 containerd[1474]: time="2025-03-25T02:34:23.882323189Z" level=info msg="CreateContainer within sandbox \"78718bca15bde8c6c36ab34dcbeb78f8f4bc41bc81109a26ee71b26f1708e6d7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 25 02:34:23.894009 containerd[1474]: time="2025-03-25T02:34:23.892439374Z" level=info msg="Container 2c3e62df1dab71f880d06b4d53b99039ebdc4258845be41c875b16ba359c8238: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:34:23.903989 containerd[1474]: time="2025-03-25T02:34:23.903942578Z" level=info msg="CreateContainer within sandbox \"78718bca15bde8c6c36ab34dcbeb78f8f4bc41bc81109a26ee71b26f1708e6d7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2c3e62df1dab71f880d06b4d53b99039ebdc4258845be41c875b16ba359c8238\""
Mar 25 02:34:23.905806 containerd[1474]: time="2025-03-25T02:34:23.904558947Z" level=info msg="StartContainer for \"2c3e62df1dab71f880d06b4d53b99039ebdc4258845be41c875b16ba359c8238\""
Mar 25 02:34:23.905806 containerd[1474]: time="2025-03-25T02:34:23.905623497Z" level=info msg="connecting to shim 2c3e62df1dab71f880d06b4d53b99039ebdc4258845be41c875b16ba359c8238" address="unix:///run/containerd/s/5b7159d0fe250ce3e654df2c76aaf30bca957f6418bcefd217fee22dab39cf8b" protocol=ttrpc version=3
Mar 25 02:34:23.932427 systemd[1]: Started cri-containerd-2c3e62df1dab71f880d06b4d53b99039ebdc4258845be41c875b16ba359c8238.scope - libcontainer container 2c3e62df1dab71f880d06b4d53b99039ebdc4258845be41c875b16ba359c8238.
Mar 25 02:34:23.985184 containerd[1474]: time="2025-03-25T02:34:23.985092395Z" level=info msg="StartContainer for \"2c3e62df1dab71f880d06b4d53b99039ebdc4258845be41c875b16ba359c8238\" returns successfully"
Mar 25 02:34:24.268813 kubelet[2696]: E0325 02:34:24.268118 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j9qch" podUID="f79a0a0e-5416-4235-b1a3-2a817bf38d19"
Mar 25 02:34:24.390805 kubelet[2696]: I0325 02:34:24.390425 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7dd745bcb7-9dngt" podStartSLOduration=1.362517441 podStartE2EDuration="4.390410926s" podCreationTimestamp="2025-03-25 02:34:20 +0000 UTC" firstStartedPulling="2025-03-25 02:34:20.841590676 +0000 UTC m=+12.712345478" lastFinishedPulling="2025-03-25 02:34:23.869484161 +0000 UTC m=+15.740238963" observedRunningTime="2025-03-25 02:34:24.388319146 +0000 UTC m=+16.259073938" watchObservedRunningTime="2025-03-25 02:34:24.390410926 +0000 UTC m=+16.261165728"
Mar 25 02:34:24.459438 kubelet[2696]: E0325 02:34:24.459394 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.459438 kubelet[2696]: W0325 02:34:24.459440 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.459636 kubelet[2696]: E0325 02:34:24.459475 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.460286 kubelet[2696]: E0325 02:34:24.460232 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.460371 kubelet[2696]: W0325 02:34:24.460300 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.460371 kubelet[2696]: E0325 02:34:24.460343 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.461394 kubelet[2696]: E0325 02:34:24.461360 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.461394 kubelet[2696]: W0325 02:34:24.461391 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.461627 kubelet[2696]: E0325 02:34:24.461416 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.461780 kubelet[2696]: E0325 02:34:24.461749 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.461824 kubelet[2696]: W0325 02:34:24.461778 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.461824 kubelet[2696]: E0325 02:34:24.461799 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.462189 kubelet[2696]: E0325 02:34:24.462165 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.462245 kubelet[2696]: W0325 02:34:24.462192 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.462245 kubelet[2696]: E0325 02:34:24.462214 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.462700 kubelet[2696]: E0325 02:34:24.462655 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.462746 kubelet[2696]: W0325 02:34:24.462700 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.462746 kubelet[2696]: E0325 02:34:24.462724 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.463205 kubelet[2696]: E0325 02:34:24.463177 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.463342 kubelet[2696]: W0325 02:34:24.463207 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.463342 kubelet[2696]: E0325 02:34:24.463231 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.463779 kubelet[2696]: E0325 02:34:24.463748 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.463971 kubelet[2696]: W0325 02:34:24.463777 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.463971 kubelet[2696]: E0325 02:34:24.463840 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.464365 kubelet[2696]: E0325 02:34:24.464318 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.464365 kubelet[2696]: W0325 02:34:24.464351 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.464446 kubelet[2696]: E0325 02:34:24.464373 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.464755 kubelet[2696]: E0325 02:34:24.464712 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.464755 kubelet[2696]: W0325 02:34:24.464740 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.464870 kubelet[2696]: E0325 02:34:24.464761 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.465113 kubelet[2696]: E0325 02:34:24.465086 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.465164 kubelet[2696]: W0325 02:34:24.465115 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.465164 kubelet[2696]: E0325 02:34:24.465154 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.465565 kubelet[2696]: E0325 02:34:24.465539 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.465633 kubelet[2696]: W0325 02:34:24.465568 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.465633 kubelet[2696]: E0325 02:34:24.465591 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.466167 kubelet[2696]: E0325 02:34:24.466128 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.466167 kubelet[2696]: W0325 02:34:24.466158 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.466316 kubelet[2696]: E0325 02:34:24.466181 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.466663 kubelet[2696]: E0325 02:34:24.466638 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.466714 kubelet[2696]: W0325 02:34:24.466666 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.466714 kubelet[2696]: E0325 02:34:24.466689 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.467046 kubelet[2696]: E0325 02:34:24.467020 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.467102 kubelet[2696]: W0325 02:34:24.467049 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.467102 kubelet[2696]: E0325 02:34:24.467072 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.476136 kubelet[2696]: E0325 02:34:24.476001 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.476136 kubelet[2696]: W0325 02:34:24.476022 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.476136 kubelet[2696]: E0325 02:34:24.476067 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.476570 kubelet[2696]: E0325 02:34:24.476537 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.476570 kubelet[2696]: W0325 02:34:24.476549 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.476737 kubelet[2696]: E0325 02:34:24.476666 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.477160 kubelet[2696]: E0325 02:34:24.477084 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.477160 kubelet[2696]: W0325 02:34:24.477094 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.477160 kubelet[2696]: E0325 02:34:24.477111 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.477820 kubelet[2696]: E0325 02:34:24.477785 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.477996 kubelet[2696]: W0325 02:34:24.477825 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.478159 kubelet[2696]: E0325 02:34:24.478025 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.478397 kubelet[2696]: E0325 02:34:24.478377 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.478486 kubelet[2696]: W0325 02:34:24.478447 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.478612 kubelet[2696]: E0325 02:34:24.478541 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.479029 kubelet[2696]: E0325 02:34:24.478923 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.479029 kubelet[2696]: W0325 02:34:24.478935 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.479029 kubelet[2696]: E0325 02:34:24.479008 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.479250 kubelet[2696]: E0325 02:34:24.479232 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.479355 kubelet[2696]: W0325 02:34:24.479316 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.479558 kubelet[2696]: E0325 02:34:24.479521 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.479820 kubelet[2696]: E0325 02:34:24.479774 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.479820 kubelet[2696]: W0325 02:34:24.479795 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.480077 kubelet[2696]: E0325 02:34:24.480064 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.480250 kubelet[2696]: E0325 02:34:24.480232 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.480539 kubelet[2696]: W0325 02:34:24.480388 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.480539 kubelet[2696]: E0325 02:34:24.480417 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.481333 kubelet[2696]: E0325 02:34:24.481027 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.481333 kubelet[2696]: W0325 02:34:24.481039 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.481333 kubelet[2696]: E0325 02:34:24.481055 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.481832 kubelet[2696]: E0325 02:34:24.481799 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.481904 kubelet[2696]: W0325 02:34:24.481832 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.481904 kubelet[2696]: E0325 02:34:24.481893 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:24.482599 kubelet[2696]: E0325 02:34:24.482563 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:24.482599 kubelet[2696]: W0325 02:34:24.482593 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:24.482848 kubelet[2696]: E0325 02:34:24.482634 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:24.483126 kubelet[2696]: E0325 02:34:24.483095 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:24.483354 kubelet[2696]: W0325 02:34:24.483126 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:24.483354 kubelet[2696]: E0325 02:34:24.483171 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:24.483646 kubelet[2696]: E0325 02:34:24.483561 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:24.483694 kubelet[2696]: W0325 02:34:24.483651 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:24.483794 kubelet[2696]: E0325 02:34:24.483734 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:24.484341 kubelet[2696]: E0325 02:34:24.484210 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:24.484341 kubelet[2696]: W0325 02:34:24.484237 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:24.484815 kubelet[2696]: E0325 02:34:24.484496 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:24.485816 kubelet[2696]: E0325 02:34:24.485764 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:24.485885 kubelet[2696]: W0325 02:34:24.485851 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:24.485916 kubelet[2696]: E0325 02:34:24.485891 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:24.487182 kubelet[2696]: E0325 02:34:24.487076 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:24.487182 kubelet[2696]: W0325 02:34:24.487091 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:24.487182 kubelet[2696]: E0325 02:34:24.487117 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:24.487952 kubelet[2696]: E0325 02:34:24.487908 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:24.487952 kubelet[2696]: W0325 02:34:24.487920 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:24.487952 kubelet[2696]: E0325 02:34:24.487930 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.376514 kubelet[2696]: I0325 02:34:25.375619 2696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 02:34:25.475756 kubelet[2696]: E0325 02:34:25.475439 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.475756 kubelet[2696]: W0325 02:34:25.475481 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.475756 kubelet[2696]: E0325 02:34:25.475515 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.476671 kubelet[2696]: E0325 02:34:25.476438 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.476671 kubelet[2696]: W0325 02:34:25.476468 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.476671 kubelet[2696]: E0325 02:34:25.476492 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.477100 kubelet[2696]: E0325 02:34:25.477071 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.477343 kubelet[2696]: W0325 02:34:25.477225 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.477343 kubelet[2696]: E0325 02:34:25.477258 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.478138 kubelet[2696]: E0325 02:34:25.477900 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.478138 kubelet[2696]: W0325 02:34:25.477932 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.478138 kubelet[2696]: E0325 02:34:25.477955 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.478765 kubelet[2696]: E0325 02:34:25.478735 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.479108 kubelet[2696]: W0325 02:34:25.478898 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.479108 kubelet[2696]: E0325 02:34:25.478931 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.479522 kubelet[2696]: E0325 02:34:25.479484 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.479676 kubelet[2696]: W0325 02:34:25.479651 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.480024 kubelet[2696]: E0325 02:34:25.479843 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.480324 kubelet[2696]: E0325 02:34:25.480240 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.480495 kubelet[2696]: W0325 02:34:25.480465 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.480794 kubelet[2696]: E0325 02:34:25.480617 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.481146 kubelet[2696]: E0325 02:34:25.481119 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.481353 kubelet[2696]: W0325 02:34:25.481301 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.481700 kubelet[2696]: E0325 02:34:25.481490 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.481954 kubelet[2696]: E0325 02:34:25.481926 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.482090 kubelet[2696]: W0325 02:34:25.482066 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.482432 kubelet[2696]: E0325 02:34:25.482208 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.482860 kubelet[2696]: E0325 02:34:25.482642 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.482860 kubelet[2696]: W0325 02:34:25.482669 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.482860 kubelet[2696]: E0325 02:34:25.482691 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.483226 kubelet[2696]: E0325 02:34:25.483198 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.483439 kubelet[2696]: W0325 02:34:25.483413 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.483748 kubelet[2696]: E0325 02:34:25.483562 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.483988 kubelet[2696]: E0325 02:34:25.483960 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.484135 kubelet[2696]: W0325 02:34:25.484111 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.484333 kubelet[2696]: E0325 02:34:25.484249 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.485105 kubelet[2696]: E0325 02:34:25.484889 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.485105 kubelet[2696]: W0325 02:34:25.484918 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.485105 kubelet[2696]: E0325 02:34:25.484940 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.485533 kubelet[2696]: E0325 02:34:25.485505 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.485848 kubelet[2696]: W0325 02:34:25.485642 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.485848 kubelet[2696]: E0325 02:34:25.485673 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.486167 kubelet[2696]: E0325 02:34:25.486138 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.486362 kubelet[2696]: W0325 02:34:25.486334 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.486827 kubelet[2696]: E0325 02:34:25.486478 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.487353 kubelet[2696]: E0325 02:34:25.487043 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.487353 kubelet[2696]: W0325 02:34:25.487071 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.487353 kubelet[2696]: E0325 02:34:25.487093 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.487843 kubelet[2696]: E0325 02:34:25.487813 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.488065 kubelet[2696]: W0325 02:34:25.487966 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.488395 kubelet[2696]: E0325 02:34:25.488179 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.488648 kubelet[2696]: E0325 02:34:25.488603 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.488748 kubelet[2696]: W0325 02:34:25.488648 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.488748 kubelet[2696]: E0325 02:34:25.488720 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.489600 kubelet[2696]: E0325 02:34:25.489553 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.489600 kubelet[2696]: W0325 02:34:25.489585 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.489782 kubelet[2696]: E0325 02:34:25.489659 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.490523 kubelet[2696]: E0325 02:34:25.490467 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.490642 kubelet[2696]: W0325 02:34:25.490525 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.490642 kubelet[2696]: E0325 02:34:25.490563 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.491187 kubelet[2696]: E0325 02:34:25.491106 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.491187 kubelet[2696]: W0325 02:34:25.491142 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.491553 kubelet[2696]: E0325 02:34:25.491348 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.491553 kubelet[2696]: E0325 02:34:25.491526 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.491553 kubelet[2696]: W0325 02:34:25.491547 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.491821 kubelet[2696]: E0325 02:34:25.491759 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.492008 kubelet[2696]: E0325 02:34:25.491973 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.492008 kubelet[2696]: W0325 02:34:25.492004 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.492358 kubelet[2696]: E0325 02:34:25.492065 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.492470 kubelet[2696]: E0325 02:34:25.492396 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.492470 kubelet[2696]: W0325 02:34:25.492416 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.492470 kubelet[2696]: E0325 02:34:25.492451 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.493331 kubelet[2696]: E0325 02:34:25.493144 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.493331 kubelet[2696]: W0325 02:34:25.493180 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.493331 kubelet[2696]: E0325 02:34:25.493320 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.494368 kubelet[2696]: E0325 02:34:25.494097 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.494368 kubelet[2696]: W0325 02:34:25.494130 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.496506 kubelet[2696]: E0325 02:34:25.494501 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.496506 kubelet[2696]: E0325 02:34:25.495019 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.496506 kubelet[2696]: W0325 02:34:25.495041 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.496506 kubelet[2696]: E0325 02:34:25.495500 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.496506 kubelet[2696]: W0325 02:34:25.495551 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.496506 kubelet[2696]: E0325 02:34:25.495576 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.496506 kubelet[2696]: E0325 02:34:25.495619 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.496506 kubelet[2696]: E0325 02:34:25.496046 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.496506 kubelet[2696]: W0325 02:34:25.496120 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.496506 kubelet[2696]: E0325 02:34:25.496143 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.497129 kubelet[2696]: E0325 02:34:25.496750 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.497129 kubelet[2696]: W0325 02:34:25.496775 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.497129 kubelet[2696]: E0325 02:34:25.496868 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 02:34:25.499619 kubelet[2696]: E0325 02:34:25.497804 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.499619 kubelet[2696]: W0325 02:34:25.497839 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.499619 kubelet[2696]: E0325 02:34:25.498488 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 02:34:25.499619 kubelet[2696]: W0325 02:34:25.498510 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 02:34:25.499619 kubelet[2696]: E0325 02:34:25.498585 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 02:34:25.501143 kubelet[2696]: E0325 02:34:25.500533 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 25 02:34:25.516371 kubelet[2696]: E0325 02:34:25.516209 2696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 25 02:34:25.516610 kubelet[2696]: W0325 02:34:25.516251 2696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 25 02:34:25.516777 kubelet[2696]: E0325 02:34:25.516749 2696 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 25 02:34:25.875346 containerd[1474]: time="2025-03-25T02:34:25.875301811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:25.876991 containerd[1474]: time="2025-03-25T02:34:25.876941379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011"
Mar 25 02:34:25.878295 containerd[1474]: time="2025-03-25T02:34:25.878255183Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:25.880339 containerd[1474]: time="2025-03-25T02:34:25.880285674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:25.880967 containerd[1474]: time="2025-03-25T02:34:25.880823553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 2.010663103s"
Mar 25 02:34:25.880967 containerd[1474]: time="2025-03-25T02:34:25.880854825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\""
Mar 25 02:34:25.883140 containerd[1474]: time="2025-03-25T02:34:25.883117692Z" level=info msg="CreateContainer within sandbox \"0fb40c8995d3879246f0d889943529f192ad24d83901858dd4c00c511fb70437\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 25 02:34:25.908838 containerd[1474]: time="2025-03-25T02:34:25.906368550Z" level=info msg="Container 8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:34:25.922909 containerd[1474]: time="2025-03-25T02:34:25.922863623Z" level=info msg="CreateContainer within sandbox \"0fb40c8995d3879246f0d889943529f192ad24d83901858dd4c00c511fb70437\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d\""
Mar 25 02:34:25.926432 containerd[1474]: time="2025-03-25T02:34:25.924427909Z" level=info msg="StartContainer for \"8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d\""
Mar 25 02:34:25.926432 containerd[1474]: time="2025-03-25T02:34:25.926075261Z" level=info msg="connecting to shim 8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d" address="unix:///run/containerd/s/8ef4cdecd244b15348bb08f98c058cfb4ff3656087fe93c08a37edd1c9dc089a" protocol=ttrpc version=3
Mar 25 02:34:25.954409 systemd[1]: Started cri-containerd-8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d.scope - libcontainer container 8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d.
Mar 25 02:34:26.000982 containerd[1474]: time="2025-03-25T02:34:26.000949203Z" level=info msg="StartContainer for \"8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d\" returns successfully"
Mar 25 02:34:26.007839 systemd[1]: cri-containerd-8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d.scope: Deactivated successfully.
Mar 25 02:34:26.010860 containerd[1474]: time="2025-03-25T02:34:26.010826525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d\" id:\"8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d\" pid:3325 exited_at:{seconds:1742870066 nanos:10318888}"
Mar 25 02:34:26.011073 containerd[1474]: time="2025-03-25T02:34:26.011037808Z" level=info msg="received exit event container_id:\"8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d\" id:\"8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d\" pid:3325 exited_at:{seconds:1742870066 nanos:10318888}"
Mar 25 02:34:26.032736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8602e3ed2bf71a95bb7aec798eeb97a6ef1e8e18e476b411c34970e06f40309d-rootfs.mount: Deactivated successfully.
Mar 25 02:34:26.268558 kubelet[2696]: E0325 02:34:26.267808 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j9qch" podUID="f79a0a0e-5416-4235-b1a3-2a817bf38d19"
Mar 25 02:34:27.399043 containerd[1474]: time="2025-03-25T02:34:27.398615883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\""
Mar 25 02:34:28.269789 kubelet[2696]: E0325 02:34:28.269088 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j9qch" podUID="f79a0a0e-5416-4235-b1a3-2a817bf38d19"
Mar 25 02:34:30.269280 kubelet[2696]: E0325 02:34:30.267819 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j9qch" podUID="f79a0a0e-5416-4235-b1a3-2a817bf38d19"
Mar 25 02:34:32.268088 kubelet[2696]: E0325 02:34:32.268038 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j9qch" podUID="f79a0a0e-5416-4235-b1a3-2a817bf38d19"
Mar 25 02:34:33.265025 containerd[1474]: time="2025-03-25T02:34:33.264987308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:33.266178 containerd[1474]: time="2025-03-25T02:34:33.266144552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477"
Mar 25 02:34:33.267404 containerd[1474]: time="2025-03-25T02:34:33.267383547Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:33.270426 containerd[1474]: time="2025-03-25T02:34:33.270405490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:33.270987 containerd[1474]: time="2025-03-25T02:34:33.270947929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 5.872267707s"
Mar 25 02:34:33.271035 containerd[1474]: time="2025-03-25T02:34:33.270986055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\""
Mar 25 02:34:33.274298 containerd[1474]: time="2025-03-25T02:34:33.274241793Z" level=info msg="CreateContainer within sandbox \"0fb40c8995d3879246f0d889943529f192ad24d83901858dd4c00c511fb70437\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 25 02:34:33.290222 containerd[1474]: time="2025-03-25T02:34:33.288986181Z" level=info msg="Container 7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:34:33.305026 containerd[1474]: time="2025-03-25T02:34:33.304983813Z" level=info msg="CreateContainer within sandbox \"0fb40c8995d3879246f0d889943529f192ad24d83901858dd4c00c511fb70437\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82\""
Mar 25 02:34:33.306589 containerd[1474]: time="2025-03-25T02:34:33.305411955Z" level=info msg="StartContainer for \"7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82\""
Mar 25 02:34:33.306990 containerd[1474]: time="2025-03-25T02:34:33.306957781Z" level=info msg="connecting to shim 7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82" address="unix:///run/containerd/s/8ef4cdecd244b15348bb08f98c058cfb4ff3656087fe93c08a37edd1c9dc089a" protocol=ttrpc version=3
Mar 25 02:34:33.330410 systemd[1]: Started cri-containerd-7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82.scope - libcontainer container 7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82.
Mar 25 02:34:33.373648 containerd[1474]: time="2025-03-25T02:34:33.373561809Z" level=info msg="StartContainer for \"7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82\" returns successfully"
Mar 25 02:34:34.270832 kubelet[2696]: E0325 02:34:34.268661 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j9qch" podUID="f79a0a0e-5416-4235-b1a3-2a817bf38d19"
Mar 25 02:34:34.516139 containerd[1474]: time="2025-03-25T02:34:34.516099461Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 25 02:34:34.520050 systemd[1]: cri-containerd-7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82.scope: Deactivated successfully.
Mar 25 02:34:34.521933 containerd[1474]: time="2025-03-25T02:34:34.521343367Z" level=info msg="received exit event container_id:\"7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82\" id:\"7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82\" pid:3385 exited_at:{seconds:1742870074 nanos:519886241}"
Mar 25 02:34:34.521933 containerd[1474]: time="2025-03-25T02:34:34.521498607Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82\" id:\"7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82\" pid:3385 exited_at:{seconds:1742870074 nanos:519886241}"
Mar 25 02:34:34.520293 systemd[1]: cri-containerd-7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82.scope: Consumed 654ms CPU time, 171.8M memory peak, 154M written to disk.
Mar 25 02:34:34.541400 kubelet[2696]: I0325 02:34:34.540040 2696 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Mar 25 02:34:34.554990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fd7a2e52f2df03ab05b5551b02899d47860745e792f7d854e7045f679cc5b82-rootfs.mount: Deactivated successfully.
Mar 25 02:34:34.905766 systemd[1]: Created slice kubepods-burstable-pod214903cc_b391_4047_856d_b05154dc66b9.slice - libcontainer container kubepods-burstable-pod214903cc_b391_4047_856d_b05154dc66b9.slice.
Mar 25 02:34:34.931678 systemd[1]: Created slice kubepods-burstable-pod2b559c34_893b_4f02_97bc_c02a40c329db.slice - libcontainer container kubepods-burstable-pod2b559c34_893b_4f02_97bc_c02a40c329db.slice.
Mar 25 02:34:34.938959 systemd[1]: Created slice kubepods-besteffort-poda4a252c4_bc04_429c_a966_5e112b1ec89a.slice - libcontainer container kubepods-besteffort-poda4a252c4_bc04_429c_a966_5e112b1ec89a.slice.
Mar 25 02:34:34.946603 systemd[1]: Created slice kubepods-besteffort-pod7f391f5b_dcc7_4a47_b54c_3b6f49c4fbe2.slice - libcontainer container kubepods-besteffort-pod7f391f5b_dcc7_4a47_b54c_3b6f49c4fbe2.slice.
Mar 25 02:34:34.951062 systemd[1]: Created slice kubepods-besteffort-pode8693bdb_73f1_4d4a_97a8_0a999f60d9a1.slice - libcontainer container kubepods-besteffort-pode8693bdb_73f1_4d4a_97a8_0a999f60d9a1.slice.
Mar 25 02:34:34.959170 kubelet[2696]: I0325 02:34:34.959139 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chqcv\" (UniqueName: \"kubernetes.io/projected/214903cc-b391-4047-856d-b05154dc66b9-kube-api-access-chqcv\") pod \"coredns-6f6b679f8f-8lq6d\" (UID: \"214903cc-b391-4047-856d-b05154dc66b9\") " pod="kube-system/coredns-6f6b679f8f-8lq6d"
Mar 25 02:34:34.959288 kubelet[2696]: I0325 02:34:34.959217 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/214903cc-b391-4047-856d-b05154dc66b9-config-volume\") pod \"coredns-6f6b679f8f-8lq6d\" (UID: \"214903cc-b391-4047-856d-b05154dc66b9\") " pod="kube-system/coredns-6f6b679f8f-8lq6d"
Mar 25 02:34:35.059731 kubelet[2696]: I0325 02:34:35.059645 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8693bdb-73f1-4d4a-97a8-0a999f60d9a1-tigera-ca-bundle\") pod \"calico-kube-controllers-5fff7f6b85-pbvr6\" (UID: \"e8693bdb-73f1-4d4a-97a8-0a999f60d9a1\") " pod="calico-system/calico-kube-controllers-5fff7f6b85-pbvr6"
Mar 25 02:34:35.059731 kubelet[2696]: I0325 02:34:35.059726 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkdn2\" (UniqueName: \"kubernetes.io/projected/7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2-kube-api-access-kkdn2\") pod \"calico-apiserver-6975797848-8h5kp\" (UID: \"7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2\") " pod="calico-apiserver/calico-apiserver-6975797848-8h5kp"
Mar 25 02:34:35.059957 kubelet[2696]: I0325 02:34:35.059810 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vztlk\" (UniqueName: \"kubernetes.io/projected/2b559c34-893b-4f02-97bc-c02a40c329db-kube-api-access-vztlk\") pod \"coredns-6f6b679f8f-rhz22\" (UID: \"2b559c34-893b-4f02-97bc-c02a40c329db\") " pod="kube-system/coredns-6f6b679f8f-rhz22"
Mar 25 02:34:35.059957 kubelet[2696]: I0325 02:34:35.059894 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4qv2\" (UniqueName: \"kubernetes.io/projected/e8693bdb-73f1-4d4a-97a8-0a999f60d9a1-kube-api-access-c4qv2\") pod \"calico-kube-controllers-5fff7f6b85-pbvr6\" (UID: \"e8693bdb-73f1-4d4a-97a8-0a999f60d9a1\") " pod="calico-system/calico-kube-controllers-5fff7f6b85-pbvr6"
Mar 25 02:34:35.060107 kubelet[2696]: I0325 02:34:35.059979 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2-calico-apiserver-certs\") pod \"calico-apiserver-6975797848-8h5kp\" (UID: \"7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2\") " pod="calico-apiserver/calico-apiserver-6975797848-8h5kp"
Mar 25 02:34:35.060107 kubelet[2696]: I0325 02:34:35.060030 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4a252c4-bc04-429c-a966-5e112b1ec89a-calico-apiserver-certs\") pod \"calico-apiserver-6975797848-wwxm2\" (UID: \"a4a252c4-bc04-429c-a966-5e112b1ec89a\") " pod="calico-apiserver/calico-apiserver-6975797848-wwxm2"
Mar 25 02:34:35.060107 kubelet[2696]: I0325 02:34:35.060074 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6djg6\" (UniqueName: \"kubernetes.io/projected/a4a252c4-bc04-429c-a966-5e112b1ec89a-kube-api-access-6djg6\") pod \"calico-apiserver-6975797848-wwxm2\" (UID: \"a4a252c4-bc04-429c-a966-5e112b1ec89a\") " pod="calico-apiserver/calico-apiserver-6975797848-wwxm2"
Mar 25 02:34:35.060345 kubelet[2696]: I0325 02:34:35.060118 2696 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2b559c34-893b-4f02-97bc-c02a40c329db-config-volume\") pod \"coredns-6f6b679f8f-rhz22\" (UID: \"2b559c34-893b-4f02-97bc-c02a40c329db\") " pod="kube-system/coredns-6f6b679f8f-rhz22"
Mar 25 02:34:35.255359 containerd[1474]: time="2025-03-25T02:34:35.255100763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fff7f6b85-pbvr6,Uid:e8693bdb-73f1-4d4a-97a8-0a999f60d9a1,Namespace:calico-system,Attempt:0,}"
Mar 25 02:34:35.326343 kubelet[2696]: I0325 02:34:35.325593 2696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 25 02:34:35.528340 containerd[1474]: time="2025-03-25T02:34:35.525924936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8lq6d,Uid:214903cc-b391-4047-856d-b05154dc66b9,Namespace:kube-system,Attempt:0,}"
Mar 25 02:34:35.536865 containerd[1474]: time="2025-03-25T02:34:35.536616481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhz22,Uid:2b559c34-893b-4f02-97bc-c02a40c329db,Namespace:kube-system,Attempt:0,}"
Mar 25 02:34:35.543633 containerd[1474]: time="2025-03-25T02:34:35.543498426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6975797848-wwxm2,Uid:a4a252c4-bc04-429c-a966-5e112b1ec89a,Namespace:calico-apiserver,Attempt:0,}"
Mar 25 02:34:35.554408 containerd[1474]: time="2025-03-25T02:34:35.552848715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6975797848-8h5kp,Uid:7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2,Namespace:calico-apiserver,Attempt:0,}"
Mar 25 02:34:35.621186 containerd[1474]: time="2025-03-25T02:34:35.620962561Z" level=error msg="Failed to destroy network for sandbox \"5f55bfbf90acf540120163eff1121b1ebf74df78129a72a3d3e0279d6e3a7ff3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.623914 systemd[1]: run-netns-cni\x2df052f0d4\x2d3bfe\x2db15c\x2d166d\x2da74afe80626b.mount: Deactivated successfully.
Mar 25 02:34:35.626789 kubelet[2696]: E0325 02:34:35.625956 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f55bfbf90acf540120163eff1121b1ebf74df78129a72a3d3e0279d6e3a7ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.626789 kubelet[2696]: E0325 02:34:35.626035 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f55bfbf90acf540120163eff1121b1ebf74df78129a72a3d3e0279d6e3a7ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fff7f6b85-pbvr6"
Mar 25 02:34:35.626789 kubelet[2696]: E0325 02:34:35.626060 2696 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f55bfbf90acf540120163eff1121b1ebf74df78129a72a3d3e0279d6e3a7ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fff7f6b85-pbvr6"
Mar 25 02:34:35.626903 containerd[1474]: time="2025-03-25T02:34:35.625470739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fff7f6b85-pbvr6,Uid:e8693bdb-73f1-4d4a-97a8-0a999f60d9a1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f55bfbf90acf540120163eff1121b1ebf74df78129a72a3d3e0279d6e3a7ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.626985 kubelet[2696]: E0325 02:34:35.626118 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fff7f6b85-pbvr6_calico-system(e8693bdb-73f1-4d4a-97a8-0a999f60d9a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fff7f6b85-pbvr6_calico-system(e8693bdb-73f1-4d4a-97a8-0a999f60d9a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f55bfbf90acf540120163eff1121b1ebf74df78129a72a3d3e0279d6e3a7ff3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fff7f6b85-pbvr6" podUID="e8693bdb-73f1-4d4a-97a8-0a999f60d9a1"
Mar 25 02:34:35.712372 containerd[1474]: time="2025-03-25T02:34:35.712314082Z" level=error msg="Failed to destroy network for sandbox \"1b0a928b962964948d8755fd3f8efdb8755664a68550903ffb30d77b4f1c56c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.718670 systemd[1]: run-netns-cni\x2dd1e26be5\x2decb3\x2db4b8\x2d6c08\x2d043346a980ee.mount: Deactivated successfully.
Mar 25 02:34:35.718938 containerd[1474]: time="2025-03-25T02:34:35.718898736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8lq6d,Uid:214903cc-b391-4047-856d-b05154dc66b9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0a928b962964948d8755fd3f8efdb8755664a68550903ffb30d77b4f1c56c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.719378 kubelet[2696]: E0325 02:34:35.719340 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0a928b962964948d8755fd3f8efdb8755664a68550903ffb30d77b4f1c56c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.719444 kubelet[2696]: E0325 02:34:35.719402 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0a928b962964948d8755fd3f8efdb8755664a68550903ffb30d77b4f1c56c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8lq6d"
Mar 25 02:34:35.719444 kubelet[2696]: E0325 02:34:35.719426 2696 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b0a928b962964948d8755fd3f8efdb8755664a68550903ffb30d77b4f1c56c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-8lq6d"
Mar 25 02:34:35.719527 kubelet[2696]: E0325 02:34:35.719473 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-8lq6d_kube-system(214903cc-b391-4047-856d-b05154dc66b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-8lq6d_kube-system(214903cc-b391-4047-856d-b05154dc66b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b0a928b962964948d8755fd3f8efdb8755664a68550903ffb30d77b4f1c56c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-8lq6d" podUID="214903cc-b391-4047-856d-b05154dc66b9"
Mar 25 02:34:35.729366 containerd[1474]: time="2025-03-25T02:34:35.729093504Z" level=error msg="Failed to destroy network for sandbox \"2b610cbeebbb28ad96eef816c368569c159d984b47d2a6376ccff26e58e36665\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.731567 containerd[1474]: time="2025-03-25T02:34:35.731521368Z" level=error msg="Failed to destroy network for sandbox \"6bd83a0675f6e723d8de2f6b894e41f28b34f8c19f9c0c322d52b7e1a84aaa67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.731902 containerd[1474]: time="2025-03-25T02:34:35.731782517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6975797848-8h5kp,Uid:7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b610cbeebbb28ad96eef816c368569c159d984b47d2a6376ccff26e58e36665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.732361 kubelet[2696]: E0325 02:34:35.732319 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b610cbeebbb28ad96eef816c368569c159d984b47d2a6376ccff26e58e36665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.732518 kubelet[2696]: E0325 02:34:35.732386 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b610cbeebbb28ad96eef816c368569c159d984b47d2a6376ccff26e58e36665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6975797848-8h5kp"
Mar 25 02:34:35.732518 kubelet[2696]: E0325 02:34:35.732408 2696 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b610cbeebbb28ad96eef816c368569c159d984b47d2a6376ccff26e58e36665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6975797848-8h5kp"
Mar 25 02:34:35.732518 kubelet[2696]: E0325 02:34:35.732453 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6975797848-8h5kp_calico-apiserver(7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6975797848-8h5kp_calico-apiserver(7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b610cbeebbb28ad96eef816c368569c159d984b47d2a6376ccff26e58e36665\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6975797848-8h5kp" podUID="7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2"
Mar 25 02:34:35.735971 containerd[1474]: time="2025-03-25T02:34:35.735353382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6975797848-wwxm2,Uid:a4a252c4-bc04-429c-a966-5e112b1ec89a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bd83a0675f6e723d8de2f6b894e41f28b34f8c19f9c0c322d52b7e1a84aaa67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.736073 kubelet[2696]: E0325 02:34:35.735702 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bd83a0675f6e723d8de2f6b894e41f28b34f8c19f9c0c322d52b7e1a84aaa67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.736073 kubelet[2696]: E0325 02:34:35.735752 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bd83a0675f6e723d8de2f6b894e41f28b34f8c19f9c0c322d52b7e1a84aaa67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6975797848-wwxm2"
Mar 25 02:34:35.736073 kubelet[2696]: E0325 02:34:35.735773 2696 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6bd83a0675f6e723d8de2f6b894e41f28b34f8c19f9c0c322d52b7e1a84aaa67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6975797848-wwxm2"
Mar 25 02:34:35.736167 kubelet[2696]: E0325 02:34:35.735814 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6975797848-wwxm2_calico-apiserver(a4a252c4-bc04-429c-a966-5e112b1ec89a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6975797848-wwxm2_calico-apiserver(a4a252c4-bc04-429c-a966-5e112b1ec89a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6bd83a0675f6e723d8de2f6b894e41f28b34f8c19f9c0c322d52b7e1a84aaa67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6975797848-wwxm2" podUID="a4a252c4-bc04-429c-a966-5e112b1ec89a"
Mar 25 02:34:35.740112 containerd[1474]: time="2025-03-25T02:34:35.739940297Z" level=error msg="Failed to destroy network for sandbox \"91bf06a181e08280d2f33e068dcba9b1f65ff0e68e59492e4de98d04321aec86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.741846 containerd[1474]: time="2025-03-25T02:34:35.741774722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhz22,Uid:2b559c34-893b-4f02-97bc-c02a40c329db,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bf06a181e08280d2f33e068dcba9b1f65ff0e68e59492e4de98d04321aec86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.742510 kubelet[2696]: E0325 02:34:35.742098 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bf06a181e08280d2f33e068dcba9b1f65ff0e68e59492e4de98d04321aec86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:35.742510 kubelet[2696]: E0325 02:34:35.742146 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bf06a181e08280d2f33e068dcba9b1f65ff0e68e59492e4de98d04321aec86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rhz22"
Mar 25 02:34:35.742510 kubelet[2696]: E0325 02:34:35.742166 2696 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91bf06a181e08280d2f33e068dcba9b1f65ff0e68e59492e4de98d04321aec86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-rhz22"
Mar 25 02:34:35.742629 kubelet[2696]: E0325 02:34:35.742210 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-rhz22_kube-system(2b559c34-893b-4f02-97bc-c02a40c329db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-rhz22_kube-system(2b559c34-893b-4f02-97bc-c02a40c329db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91bf06a181e08280d2f33e068dcba9b1f65ff0e68e59492e4de98d04321aec86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-rhz22" podUID="2b559c34-893b-4f02-97bc-c02a40c329db"
Mar 25 02:34:36.280210 systemd[1]: Created slice kubepods-besteffort-podf79a0a0e_5416_4235_b1a3_2a817bf38d19.slice - libcontainer container kubepods-besteffort-podf79a0a0e_5416_4235_b1a3_2a817bf38d19.slice.
Mar 25 02:34:36.286095 containerd[1474]: time="2025-03-25T02:34:36.286006452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j9qch,Uid:f79a0a0e-5416-4235-b1a3-2a817bf38d19,Namespace:calico-system,Attempt:0,}"
Mar 25 02:34:36.390661 containerd[1474]: time="2025-03-25T02:34:36.390240122Z" level=error msg="Failed to destroy network for sandbox \"ab9027d42be375fa2b0dd20ebaa768c1b5db7a3196d38940ca9ab17de2838138\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:36.392716 containerd[1474]: time="2025-03-25T02:34:36.392641891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j9qch,Uid:f79a0a0e-5416-4235-b1a3-2a817bf38d19,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab9027d42be375fa2b0dd20ebaa768c1b5db7a3196d38940ca9ab17de2838138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:36.393229 kubelet[2696]: E0325 02:34:36.393157 2696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab9027d42be375fa2b0dd20ebaa768c1b5db7a3196d38940ca9ab17de2838138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 25 02:34:36.393511 kubelet[2696]: E0325 02:34:36.393398 2696 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab9027d42be375fa2b0dd20ebaa768c1b5db7a3196d38940ca9ab17de2838138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j9qch"
Mar 25 02:34:36.393631 kubelet[2696]: E0325 02:34:36.393496 2696 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab9027d42be375fa2b0dd20ebaa768c1b5db7a3196d38940ca9ab17de2838138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j9qch"
Mar 25 02:34:36.393837 kubelet[2696]: E0325 02:34:36.393742 2696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j9qch_calico-system(f79a0a0e-5416-4235-b1a3-2a817bf38d19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j9qch_calico-system(f79a0a0e-5416-4235-b1a3-2a817bf38d19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab9027d42be375fa2b0dd20ebaa768c1b5db7a3196d38940ca9ab17de2838138\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j9qch" podUID="f79a0a0e-5416-4235-b1a3-2a817bf38d19"
Mar 25 02:34:36.426502 containerd[1474]: time="2025-03-25T02:34:36.424515626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\""
Mar 25 02:34:36.554363 systemd[1]: run-netns-cni\x2d5586c54f\x2d2eaf\x2dfe7c\x2d0216\x2d6b48564910e9.mount: Deactivated successfully.
Mar 25 02:34:36.554594 systemd[1]: run-netns-cni\x2d2bf52132\x2d4b12\x2dacce\x2dfa50\x2dfa1aba4a2ed0.mount: Deactivated successfully.
Mar 25 02:34:36.554756 systemd[1]: run-netns-cni\x2d4d0ca33b\x2d2ec4\x2df305\x2db5d4\x2d17750d931fac.mount: Deactivated successfully.
Mar 25 02:34:44.680421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount726831450.mount: Deactivated successfully.
Mar 25 02:34:44.726472 containerd[1474]: time="2025-03-25T02:34:44.726434745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:34:44.728318 containerd[1474]: time="2025-03-25T02:34:44.728277517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 25 02:34:44.729805 containerd[1474]: time="2025-03-25T02:34:44.729762173Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:34:44.732638 containerd[1474]: time="2025-03-25T02:34:44.732595004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:34:44.733642 containerd[1474]: time="2025-03-25T02:34:44.733220140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 8.308640459s" Mar 25 02:34:44.733642 containerd[1474]: time="2025-03-25T02:34:44.733281227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 25 02:34:44.747563 containerd[1474]: time="2025-03-25T02:34:44.747505976Z" level=info msg="CreateContainer within sandbox \"0fb40c8995d3879246f0d889943529f192ad24d83901858dd4c00c511fb70437\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 25 02:34:44.765303 containerd[1474]: time="2025-03-25T02:34:44.760122704Z" level=info msg="Container 
f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:34:44.775951 containerd[1474]: time="2025-03-25T02:34:44.775910927Z" level=info msg="CreateContainer within sandbox \"0fb40c8995d3879246f0d889943529f192ad24d83901858dd4c00c511fb70437\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\"" Mar 25 02:34:44.777108 containerd[1474]: time="2025-03-25T02:34:44.776468666Z" level=info msg="StartContainer for \"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\"" Mar 25 02:34:44.779029 containerd[1474]: time="2025-03-25T02:34:44.778995669Z" level=info msg="connecting to shim f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43" address="unix:///run/containerd/s/8ef4cdecd244b15348bb08f98c058cfb4ff3656087fe93c08a37edd1c9dc089a" protocol=ttrpc version=3 Mar 25 02:34:44.800801 systemd[1]: Started cri-containerd-f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43.scope - libcontainer container f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43. Mar 25 02:34:44.850592 containerd[1474]: time="2025-03-25T02:34:44.850552006Z" level=info msg="StartContainer for \"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" returns successfully" Mar 25 02:34:44.915434 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 25 02:34:44.915539 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 25 02:34:45.585337 containerd[1474]: time="2025-03-25T02:34:45.585197781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" id:\"7c8229fdccc309943bb01b92c2bbba6ab2eadb8794bddb4ba458f819a5c4a0aa\" pid:3684 exit_status:1 exited_at:{seconds:1742870085 nanos:584894878}" Mar 25 02:34:46.588309 kernel: bpftool[3839]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 25 02:34:46.601689 containerd[1474]: time="2025-03-25T02:34:46.601647847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" id:\"db1be3f45b58ef3377408b2a8a626cb359dbb0eab34ddfc9d0e3723e42fa6e3a\" pid:3810 exit_status:1 exited_at:{seconds:1742870086 nanos:601314377}" Mar 25 02:34:47.024038 systemd-networkd[1389]: vxlan.calico: Link UP Mar 25 02:34:47.024058 systemd-networkd[1389]: vxlan.calico: Gained carrier Mar 25 02:34:48.271140 containerd[1474]: time="2025-03-25T02:34:48.270098269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8lq6d,Uid:214903cc-b391-4047-856d-b05154dc66b9,Namespace:kube-system,Attempt:0,}" Mar 25 02:34:48.273128 containerd[1474]: time="2025-03-25T02:34:48.272839941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fff7f6b85-pbvr6,Uid:e8693bdb-73f1-4d4a-97a8-0a999f60d9a1,Namespace:calico-system,Attempt:0,}" Mar 25 02:34:48.509379 systemd-networkd[1389]: cali870c512b556: Link UP Mar 25 02:34:48.510257 systemd-networkd[1389]: cali870c512b556: Gained carrier Mar 25 02:34:48.530155 kubelet[2696]: I0325 02:34:48.528912 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sd296" podStartSLOduration=4.733530568 podStartE2EDuration="28.528897041s" podCreationTimestamp="2025-03-25 02:34:20 +0000 UTC" firstStartedPulling="2025-03-25 02:34:20.938957233 +0000 UTC m=+12.809712035" 
lastFinishedPulling="2025-03-25 02:34:44.734323706 +0000 UTC m=+36.605078508" observedRunningTime="2025-03-25 02:34:45.510758048 +0000 UTC m=+37.381512890" watchObservedRunningTime="2025-03-25 02:34:48.528897041 +0000 UTC m=+40.399651843" Mar 25 02:34:48.543012 containerd[1474]: 2025-03-25 02:34:48.410 [INFO][3915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0 calico-kube-controllers-5fff7f6b85- calico-system e8693bdb-73f1-4d4a-97a8-0a999f60d9a1 671 0 2025-03-25 02:34:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fff7f6b85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4284-0-0-3-6c96446f48.novalocal calico-kube-controllers-5fff7f6b85-pbvr6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali870c512b556 [] []}} ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Namespace="calico-system" Pod="calico-kube-controllers-5fff7f6b85-pbvr6" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-" Mar 25 02:34:48.543012 containerd[1474]: 2025-03-25 02:34:48.411 [INFO][3915] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Namespace="calico-system" Pod="calico-kube-controllers-5fff7f6b85-pbvr6" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0" Mar 25 02:34:48.543012 containerd[1474]: 2025-03-25 02:34:48.448 [INFO][3934] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" 
HandleID="k8s-pod-network.48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0" Mar 25 02:34:48.543208 containerd[1474]: 2025-03-25 02:34:48.462 [INFO][3934] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" HandleID="k8s-pod-network.48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bcba0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284-0-0-3-6c96446f48.novalocal", "pod":"calico-kube-controllers-5fff7f6b85-pbvr6", "timestamp":"2025-03-25 02:34:48.448715901 +0000 UTC"}, Hostname:"ci-4284-0-0-3-6c96446f48.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:34:48.543208 containerd[1474]: 2025-03-25 02:34:48.462 [INFO][3934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:34:48.543208 containerd[1474]: 2025-03-25 02:34:48.462 [INFO][3934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 02:34:48.543208 containerd[1474]: 2025-03-25 02:34:48.462 [INFO][3934] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-3-6c96446f48.novalocal' Mar 25 02:34:48.543208 containerd[1474]: 2025-03-25 02:34:48.465 [INFO][3934] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.543208 containerd[1474]: 2025-03-25 02:34:48.470 [INFO][3934] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.543208 containerd[1474]: 2025-03-25 02:34:48.474 [INFO][3934] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.543208 containerd[1474]: 2025-03-25 02:34:48.476 [INFO][3934] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.543208 containerd[1474]: 2025-03-25 02:34:48.478 [INFO][3934] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.543469 containerd[1474]: 2025-03-25 02:34:48.478 [INFO][3934] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.543469 containerd[1474]: 2025-03-25 02:34:48.480 [INFO][3934] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a Mar 25 02:34:48.543469 containerd[1474]: 2025-03-25 02:34:48.488 [INFO][3934] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.543469 
containerd[1474]: 2025-03-25 02:34:48.493 [INFO][3934] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.129/26] block=192.168.47.128/26 handle="k8s-pod-network.48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.543469 containerd[1474]: 2025-03-25 02:34:48.493 [INFO][3934] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.129/26] handle="k8s-pod-network.48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.543469 containerd[1474]: 2025-03-25 02:34:48.493 [INFO][3934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 02:34:48.543469 containerd[1474]: 2025-03-25 02:34:48.493 [INFO][3934] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.129/26] IPv6=[] ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" HandleID="k8s-pod-network.48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0" Mar 25 02:34:48.543633 containerd[1474]: 2025-03-25 02:34:48.497 [INFO][3915] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Namespace="calico-system" Pod="calico-kube-controllers-5fff7f6b85-pbvr6" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0", GenerateName:"calico-kube-controllers-5fff7f6b85-", Namespace:"calico-system", SelfLink:"", UID:"e8693bdb-73f1-4d4a-97a8-0a999f60d9a1", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 20, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fff7f6b85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"", Pod:"calico-kube-controllers-5fff7f6b85-pbvr6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali870c512b556", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:48.543701 containerd[1474]: 2025-03-25 02:34:48.498 [INFO][3915] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.129/32] ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Namespace="calico-system" Pod="calico-kube-controllers-5fff7f6b85-pbvr6" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0" Mar 25 02:34:48.543701 containerd[1474]: 2025-03-25 02:34:48.499 [INFO][3915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali870c512b556 ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Namespace="calico-system" Pod="calico-kube-controllers-5fff7f6b85-pbvr6" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0" Mar 25 02:34:48.543701 containerd[1474]: 2025-03-25 02:34:48.512 [INFO][3915] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Namespace="calico-system" Pod="calico-kube-controllers-5fff7f6b85-pbvr6" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0" Mar 25 02:34:48.543777 containerd[1474]: 2025-03-25 02:34:48.514 [INFO][3915] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Namespace="calico-system" Pod="calico-kube-controllers-5fff7f6b85-pbvr6" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0", GenerateName:"calico-kube-controllers-5fff7f6b85-", Namespace:"calico-system", SelfLink:"", UID:"e8693bdb-73f1-4d4a-97a8-0a999f60d9a1", ResourceVersion:"671", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fff7f6b85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a", Pod:"calico-kube-controllers-5fff7f6b85-pbvr6", 
Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali870c512b556", MAC:"a2:65:0d:1a:a5:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:48.543843 containerd[1474]: 2025-03-25 02:34:48.534 [INFO][3915] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" Namespace="calico-system" Pod="calico-kube-controllers-5fff7f6b85-pbvr6" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--kube--controllers--5fff7f6b85--pbvr6-eth0" Mar 25 02:34:48.613378 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Mar 25 02:34:48.617954 containerd[1474]: time="2025-03-25T02:34:48.617915957Z" level=info msg="connecting to shim 48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a" address="unix:///run/containerd/s/a27ae1b06b0513a3f85eb065306dd48b30a2854cd0f4eb81d7d46b52452c4d0c" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:34:48.620274 systemd-networkd[1389]: cali738e0d6f04a: Link UP Mar 25 02:34:48.621051 systemd-networkd[1389]: cali738e0d6f04a: Gained carrier Mar 25 02:34:48.646210 containerd[1474]: 2025-03-25 02:34:48.410 [INFO][3910] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0 coredns-6f6b679f8f- kube-system 214903cc-b391-4047-856d-b05154dc66b9 668 0 2025-03-25 02:34:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284-0-0-3-6c96446f48.novalocal coredns-6f6b679f8f-8lq6d eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] cali738e0d6f04a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Namespace="kube-system" Pod="coredns-6f6b679f8f-8lq6d" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-" Mar 25 02:34:48.646210 containerd[1474]: 2025-03-25 02:34:48.411 [INFO][3910] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Namespace="kube-system" Pod="coredns-6f6b679f8f-8lq6d" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0" Mar 25 02:34:48.646210 containerd[1474]: 2025-03-25 02:34:48.455 [INFO][3939] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" HandleID="k8s-pod-network.d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0" Mar 25 02:34:48.646430 containerd[1474]: 2025-03-25 02:34:48.466 [INFO][3939] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" HandleID="k8s-pod-network.d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011a150), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284-0-0-3-6c96446f48.novalocal", "pod":"coredns-6f6b679f8f-8lq6d", "timestamp":"2025-03-25 02:34:48.45515304 +0000 UTC"}, Hostname:"ci-4284-0-0-3-6c96446f48.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 
02:34:48.646430 containerd[1474]: 2025-03-25 02:34:48.466 [INFO][3939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:34:48.646430 containerd[1474]: 2025-03-25 02:34:48.493 [INFO][3939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 02:34:48.646430 containerd[1474]: 2025-03-25 02:34:48.493 [INFO][3939] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-3-6c96446f48.novalocal' Mar 25 02:34:48.646430 containerd[1474]: 2025-03-25 02:34:48.567 [INFO][3939] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.646430 containerd[1474]: 2025-03-25 02:34:48.574 [INFO][3939] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.646430 containerd[1474]: 2025-03-25 02:34:48.588 [INFO][3939] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.646430 containerd[1474]: 2025-03-25 02:34:48.591 [INFO][3939] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.646430 containerd[1474]: 2025-03-25 02:34:48.593 [INFO][3939] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.646671 containerd[1474]: 2025-03-25 02:34:48.593 [INFO][3939] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.646671 containerd[1474]: 2025-03-25 02:34:48.595 [INFO][3939] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94 Mar 25 02:34:48.646671 containerd[1474]: 
2025-03-25 02:34:48.600 [INFO][3939] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.646671 containerd[1474]: 2025-03-25 02:34:48.608 [INFO][3939] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.130/26] block=192.168.47.128/26 handle="k8s-pod-network.d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.646671 containerd[1474]: 2025-03-25 02:34:48.608 [INFO][3939] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.130/26] handle="k8s-pod-network.d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:48.646671 containerd[1474]: 2025-03-25 02:34:48.608 [INFO][3939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 02:34:48.646671 containerd[1474]: 2025-03-25 02:34:48.608 [INFO][3939] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.130/26] IPv6=[] ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" HandleID="k8s-pod-network.d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0" Mar 25 02:34:48.646834 containerd[1474]: 2025-03-25 02:34:48.614 [INFO][3910] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Namespace="kube-system" Pod="coredns-6f6b679f8f-8lq6d" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0", GenerateName:"coredns-6f6b679f8f-", 
Namespace:"kube-system", SelfLink:"", UID:"214903cc-b391-4047-856d-b05154dc66b9", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-8lq6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali738e0d6f04a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:48.646834 containerd[1474]: 2025-03-25 02:34:48.615 [INFO][3910] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.130/32] ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Namespace="kube-system" Pod="coredns-6f6b679f8f-8lq6d" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0" Mar 25 02:34:48.646834 containerd[1474]: 2025-03-25 02:34:48.615 [INFO][3910] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali738e0d6f04a ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Namespace="kube-system" Pod="coredns-6f6b679f8f-8lq6d" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0" Mar 25 02:34:48.646834 containerd[1474]: 2025-03-25 02:34:48.620 [INFO][3910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Namespace="kube-system" Pod="coredns-6f6b679f8f-8lq6d" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0" Mar 25 02:34:48.646834 containerd[1474]: 2025-03-25 02:34:48.621 [INFO][3910] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Namespace="kube-system" Pod="coredns-6f6b679f8f-8lq6d" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"214903cc-b391-4047-856d-b05154dc66b9", ResourceVersion:"668", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94", Pod:"coredns-6f6b679f8f-8lq6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali738e0d6f04a", MAC:"ea:7b:b0:dc:ec:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:48.646834 containerd[1474]: 2025-03-25 02:34:48.642 [INFO][3910] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" Namespace="kube-system" Pod="coredns-6f6b679f8f-8lq6d" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--8lq6d-eth0" Mar 25 02:34:48.665785 systemd[1]: Started cri-containerd-48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a.scope - libcontainer container 48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a. 
Mar 25 02:34:48.702475 containerd[1474]: time="2025-03-25T02:34:48.702241092Z" level=info msg="connecting to shim d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94" address="unix:///run/containerd/s/f37fb7b965971f2a90e380ad72193ffc799061e45bedaf743da105a760ba5a79" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:34:48.737667 systemd[1]: Started cri-containerd-d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94.scope - libcontainer container d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94. Mar 25 02:34:48.743902 containerd[1474]: time="2025-03-25T02:34:48.743633811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fff7f6b85-pbvr6,Uid:e8693bdb-73f1-4d4a-97a8-0a999f60d9a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a\"" Mar 25 02:34:48.747416 containerd[1474]: time="2025-03-25T02:34:48.747367266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 25 02:34:48.792658 containerd[1474]: time="2025-03-25T02:34:48.792550602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8lq6d,Uid:214903cc-b391-4047-856d-b05154dc66b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94\"" Mar 25 02:34:48.797030 containerd[1474]: time="2025-03-25T02:34:48.796900061Z" level=info msg="CreateContainer within sandbox \"d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 02:34:48.812842 containerd[1474]: time="2025-03-25T02:34:48.812811898Z" level=info msg="Container 8e0a7d5368008cf95e12c32e8c60bd2a7e3546a2ca428b4caf932760b623303c: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:34:48.822777 containerd[1474]: time="2025-03-25T02:34:48.822667893Z" level=info msg="CreateContainer within sandbox 
\"d5ae9a3431d7861fa27788af84a5140630255f91c40a833e821d47b7d787ff94\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e0a7d5368008cf95e12c32e8c60bd2a7e3546a2ca428b4caf932760b623303c\"" Mar 25 02:34:48.823148 containerd[1474]: time="2025-03-25T02:34:48.823129553Z" level=info msg="StartContainer for \"8e0a7d5368008cf95e12c32e8c60bd2a7e3546a2ca428b4caf932760b623303c\"" Mar 25 02:34:48.824058 containerd[1474]: time="2025-03-25T02:34:48.823974675Z" level=info msg="connecting to shim 8e0a7d5368008cf95e12c32e8c60bd2a7e3546a2ca428b4caf932760b623303c" address="unix:///run/containerd/s/f37fb7b965971f2a90e380ad72193ffc799061e45bedaf743da105a760ba5a79" protocol=ttrpc version=3 Mar 25 02:34:48.840430 systemd[1]: Started cri-containerd-8e0a7d5368008cf95e12c32e8c60bd2a7e3546a2ca428b4caf932760b623303c.scope - libcontainer container 8e0a7d5368008cf95e12c32e8c60bd2a7e3546a2ca428b4caf932760b623303c. Mar 25 02:34:48.872256 containerd[1474]: time="2025-03-25T02:34:48.872204835Z" level=info msg="StartContainer for \"8e0a7d5368008cf95e12c32e8c60bd2a7e3546a2ca428b4caf932760b623303c\" returns successfully" Mar 25 02:34:49.269741 containerd[1474]: time="2025-03-25T02:34:49.269162279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhz22,Uid:2b559c34-893b-4f02-97bc-c02a40c329db,Namespace:kube-system,Attempt:0,}" Mar 25 02:34:49.270240 containerd[1474]: time="2025-03-25T02:34:49.270099680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6975797848-wwxm2,Uid:a4a252c4-bc04-429c-a966-5e112b1ec89a,Namespace:calico-apiserver,Attempt:0,}" Mar 25 02:34:49.458960 systemd-networkd[1389]: cali9ff6118d9be: Link UP Mar 25 02:34:49.459353 systemd-networkd[1389]: cali9ff6118d9be: Gained carrier Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.369 [INFO][4106] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0 calico-apiserver-6975797848- calico-apiserver a4a252c4-bc04-429c-a966-5e112b1ec89a 677 0 2025-03-25 02:34:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6975797848 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284-0-0-3-6c96446f48.novalocal calico-apiserver-6975797848-wwxm2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ff6118d9be [] []}} ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-wwxm2" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.370 [INFO][4106] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-wwxm2" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.408 [INFO][4123] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" HandleID="k8s-pod-network.b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.421 [INFO][4123] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" HandleID="k8s-pod-network.b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" 
Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042c270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284-0-0-3-6c96446f48.novalocal", "pod":"calico-apiserver-6975797848-wwxm2", "timestamp":"2025-03-25 02:34:49.408951144 +0000 UTC"}, Hostname:"ci-4284-0-0-3-6c96446f48.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.421 [INFO][4123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.421 [INFO][4123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.421 [INFO][4123] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-3-6c96446f48.novalocal' Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.423 [INFO][4123] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.428 [INFO][4123] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.433 [INFO][4123] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.435 [INFO][4123] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.438 [INFO][4123] ipam/ipam.go 232: Affinity is confirmed 
and block has been loaded cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.438 [INFO][4123] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.439 [INFO][4123] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791 Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.446 [INFO][4123] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.452 [INFO][4123] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.131/26] block=192.168.47.128/26 handle="k8s-pod-network.b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.452 [INFO][4123] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.131/26] handle="k8s-pod-network.b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.452 [INFO][4123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 02:34:49.487070 containerd[1474]: 2025-03-25 02:34:49.452 [INFO][4123] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.131/26] IPv6=[] ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" HandleID="k8s-pod-network.b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0" Mar 25 02:34:49.488127 containerd[1474]: 2025-03-25 02:34:49.454 [INFO][4106] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-wwxm2" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0", GenerateName:"calico-apiserver-6975797848-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4a252c4-bc04-429c-a966-5e112b1ec89a", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6975797848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"", Pod:"calico-apiserver-6975797848-wwxm2", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ff6118d9be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:49.488127 containerd[1474]: 2025-03-25 02:34:49.454 [INFO][4106] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.131/32] ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-wwxm2" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0" Mar 25 02:34:49.488127 containerd[1474]: 2025-03-25 02:34:49.454 [INFO][4106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ff6118d9be ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-wwxm2" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0" Mar 25 02:34:49.488127 containerd[1474]: 2025-03-25 02:34:49.457 [INFO][4106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-wwxm2" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0" Mar 25 02:34:49.488127 containerd[1474]: 2025-03-25 02:34:49.458 [INFO][4106] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-wwxm2" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0", GenerateName:"calico-apiserver-6975797848-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4a252c4-bc04-429c-a966-5e112b1ec89a", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6975797848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791", Pod:"calico-apiserver-6975797848-wwxm2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ff6118d9be", MAC:"62:30:24:ac:d3:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:49.488127 containerd[1474]: 2025-03-25 02:34:49.473 [INFO][4106] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-wwxm2" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--wwxm2-eth0" Mar 25 
02:34:49.499487 kubelet[2696]: I0325 02:34:49.499153 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8lq6d" podStartSLOduration=36.49913622 podStartE2EDuration="36.49913622s" podCreationTimestamp="2025-03-25 02:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:34:49.499044917 +0000 UTC m=+41.369799719" watchObservedRunningTime="2025-03-25 02:34:49.49913622 +0000 UTC m=+41.369891012" Mar 25 02:34:49.551396 containerd[1474]: time="2025-03-25T02:34:49.549951928Z" level=info msg="connecting to shim b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791" address="unix:///run/containerd/s/029d4d2d30107d065941382899e19829cb51e74ccf370bd59af701d1f478159c" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:34:49.587025 systemd[1]: Started cri-containerd-b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791.scope - libcontainer container b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791. 
Mar 25 02:34:49.611231 systemd-networkd[1389]: cali57aae118b54: Link UP Mar 25 02:34:49.612196 systemd-networkd[1389]: cali57aae118b54: Gained carrier Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.368 [INFO][4096] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0 coredns-6f6b679f8f- kube-system 2b559c34-893b-4f02-97bc-c02a40c329db 676 0 2025-03-25 02:34:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284-0-0-3-6c96446f48.novalocal coredns-6f6b679f8f-rhz22 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali57aae118b54 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Namespace="kube-system" Pod="coredns-6f6b679f8f-rhz22" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.369 [INFO][4096] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Namespace="kube-system" Pod="coredns-6f6b679f8f-rhz22" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.424 [INFO][4121] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" HandleID="k8s-pod-network.8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.436 [INFO][4121] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" HandleID="k8s-pod-network.8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec250), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284-0-0-3-6c96446f48.novalocal", "pod":"coredns-6f6b679f8f-rhz22", "timestamp":"2025-03-25 02:34:49.424564211 +0000 UTC"}, Hostname:"ci-4284-0-0-3-6c96446f48.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.436 [INFO][4121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.452 [INFO][4121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.453 [INFO][4121] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-3-6c96446f48.novalocal' Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.526 [INFO][4121] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.538 [INFO][4121] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.570 [INFO][4121] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.578 [INFO][4121] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.582 [INFO][4121] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.582 [INFO][4121] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.590 [INFO][4121] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096 Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.595 [INFO][4121] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.637683 
containerd[1474]: 2025-03-25 02:34:49.603 [INFO][4121] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.132/26] block=192.168.47.128/26 handle="k8s-pod-network.8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.603 [INFO][4121] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.132/26] handle="k8s-pod-network.8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.603 [INFO][4121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 02:34:49.637683 containerd[1474]: 2025-03-25 02:34:49.603 [INFO][4121] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.132/26] IPv6=[] ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" HandleID="k8s-pod-network.8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0" Mar 25 02:34:49.639163 containerd[1474]: 2025-03-25 02:34:49.606 [INFO][4096] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Namespace="kube-system" Pod="coredns-6f6b679f8f-rhz22" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2b559c34-893b-4f02-97bc-c02a40c329db", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"", Pod:"coredns-6f6b679f8f-rhz22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57aae118b54", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:49.639163 containerd[1474]: 2025-03-25 02:34:49.606 [INFO][4096] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.132/32] ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Namespace="kube-system" Pod="coredns-6f6b679f8f-rhz22" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0" Mar 25 02:34:49.639163 containerd[1474]: 2025-03-25 02:34:49.606 [INFO][4096] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57aae118b54 ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Namespace="kube-system" Pod="coredns-6f6b679f8f-rhz22" 
WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0" Mar 25 02:34:49.639163 containerd[1474]: 2025-03-25 02:34:49.611 [INFO][4096] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Namespace="kube-system" Pod="coredns-6f6b679f8f-rhz22" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0" Mar 25 02:34:49.639163 containerd[1474]: 2025-03-25 02:34:49.612 [INFO][4096] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Namespace="kube-system" Pod="coredns-6f6b679f8f-rhz22" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"2b559c34-893b-4f02-97bc-c02a40c329db", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096", Pod:"coredns-6f6b679f8f-rhz22", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57aae118b54", MAC:"ea:67:2b:09:2c:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:49.639163 containerd[1474]: 2025-03-25 02:34:49.632 [INFO][4096] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" Namespace="kube-system" Pod="coredns-6f6b679f8f-rhz22" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-coredns--6f6b679f8f--rhz22-eth0" Mar 25 02:34:49.688685 containerd[1474]: time="2025-03-25T02:34:49.688593016Z" level=info msg="connecting to shim 8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096" address="unix:///run/containerd/s/a0ebe94ff0ae1ccf671d4b78bf63358fe7356e41e9048180ae6e9c36637157ce" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:34:49.712239 containerd[1474]: time="2025-03-25T02:34:49.712099321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6975797848-wwxm2,Uid:a4a252c4-bc04-429c-a966-5e112b1ec89a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791\"" Mar 25 02:34:49.732436 systemd[1]: Started cri-containerd-8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096.scope - libcontainer container 8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096. 
Mar 25 02:34:49.766404 systemd-networkd[1389]: cali738e0d6f04a: Gained IPv6LL Mar 25 02:34:49.778046 containerd[1474]: time="2025-03-25T02:34:49.777927246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhz22,Uid:2b559c34-893b-4f02-97bc-c02a40c329db,Namespace:kube-system,Attempt:0,} returns sandbox id \"8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096\"" Mar 25 02:34:49.782314 containerd[1474]: time="2025-03-25T02:34:49.781519288Z" level=info msg="CreateContainer within sandbox \"8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 02:34:49.793864 containerd[1474]: time="2025-03-25T02:34:49.793821254Z" level=info msg="Container 9eb9d140b7d13f7d3e047dce9b123411730e3a3527d2197df0f098447d2e273a: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:34:49.802546 containerd[1474]: time="2025-03-25T02:34:49.802442650Z" level=info msg="CreateContainer within sandbox \"8af92531a9516b1ddf6a6c1a071c22f51516a31e3952f02482cf2b8f68df1096\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9eb9d140b7d13f7d3e047dce9b123411730e3a3527d2197df0f098447d2e273a\"" Mar 25 02:34:49.804377 containerd[1474]: time="2025-03-25T02:34:49.803053768Z" level=info msg="StartContainer for \"9eb9d140b7d13f7d3e047dce9b123411730e3a3527d2197df0f098447d2e273a\"" Mar 25 02:34:49.804377 containerd[1474]: time="2025-03-25T02:34:49.803906378Z" level=info msg="connecting to shim 9eb9d140b7d13f7d3e047dce9b123411730e3a3527d2197df0f098447d2e273a" address="unix:///run/containerd/s/a0ebe94ff0ae1ccf671d4b78bf63358fe7356e41e9048180ae6e9c36637157ce" protocol=ttrpc version=3 Mar 25 02:34:49.824432 systemd[1]: Started cri-containerd-9eb9d140b7d13f7d3e047dce9b123411730e3a3527d2197df0f098447d2e273a.scope - libcontainer container 9eb9d140b7d13f7d3e047dce9b123411730e3a3527d2197df0f098447d2e273a. 
Mar 25 02:34:49.860124 containerd[1474]: time="2025-03-25T02:34:49.860081318Z" level=info msg="StartContainer for \"9eb9d140b7d13f7d3e047dce9b123411730e3a3527d2197df0f098447d2e273a\" returns successfully" Mar 25 02:34:49.958425 systemd-networkd[1389]: cali870c512b556: Gained IPv6LL Mar 25 02:34:50.521496 kubelet[2696]: I0325 02:34:50.520880 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rhz22" podStartSLOduration=37.520844747 podStartE2EDuration="37.520844747s" podCreationTimestamp="2025-03-25 02:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:34:50.517073396 +0000 UTC m=+42.387828238" watchObservedRunningTime="2025-03-25 02:34:50.520844747 +0000 UTC m=+42.391599589" Mar 25 02:34:50.661491 systemd-networkd[1389]: cali57aae118b54: Gained IPv6LL Mar 25 02:34:50.853393 systemd-networkd[1389]: cali9ff6118d9be: Gained IPv6LL Mar 25 02:34:51.270764 containerd[1474]: time="2025-03-25T02:34:51.269907504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j9qch,Uid:f79a0a0e-5416-4235-b1a3-2a817bf38d19,Namespace:calico-system,Attempt:0,}" Mar 25 02:34:51.291126 containerd[1474]: time="2025-03-25T02:34:51.291083304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6975797848-8h5kp,Uid:7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2,Namespace:calico-apiserver,Attempt:0,}" Mar 25 02:34:51.513486 systemd-networkd[1389]: calif3e1943b139: Link UP Mar 25 02:34:51.513790 systemd-networkd[1389]: calif3e1943b139: Gained carrier Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.370 [INFO][4296] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0 csi-node-driver- calico-system f79a0a0e-5416-4235-b1a3-2a817bf38d19 581 0 2025-03-25 02:34:20 
+0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284-0-0-3-6c96446f48.novalocal csi-node-driver-j9qch eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif3e1943b139 [] []}} ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Namespace="calico-system" Pod="csi-node-driver-j9qch" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.370 [INFO][4296] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Namespace="calico-system" Pod="csi-node-driver-j9qch" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.435 [INFO][4321] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" HandleID="k8s-pod-network.6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.452 [INFO][4321] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" HandleID="k8s-pod-network.6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050150), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4284-0-0-3-6c96446f48.novalocal", "pod":"csi-node-driver-j9qch", "timestamp":"2025-03-25 02:34:51.435851144 +0000 UTC"}, Hostname:"ci-4284-0-0-3-6c96446f48.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.452 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.453 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.453 [INFO][4321] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-3-6c96446f48.novalocal' Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.458 [INFO][4321] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.465 [INFO][4321] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.473 [INFO][4321] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.479 [INFO][4321] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.482 [INFO][4321] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.482 [INFO][4321] ipam/ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.47.128/26 handle="k8s-pod-network.6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.484 [INFO][4321] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.498 [INFO][4321] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.505 [INFO][4321] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.133/26] block=192.168.47.128/26 handle="k8s-pod-network.6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.505 [INFO][4321] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.133/26] handle="k8s-pod-network.6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.505 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 02:34:51.531748 containerd[1474]: 2025-03-25 02:34:51.505 [INFO][4321] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.133/26] IPv6=[] ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" HandleID="k8s-pod-network.6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0" Mar 25 02:34:51.533698 containerd[1474]: 2025-03-25 02:34:51.507 [INFO][4296] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Namespace="calico-system" Pod="csi-node-driver-j9qch" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f79a0a0e-5416-4235-b1a3-2a817bf38d19", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"", Pod:"csi-node-driver-j9qch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3e1943b139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:51.533698 containerd[1474]: 2025-03-25 02:34:51.507 [INFO][4296] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.133/32] ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Namespace="calico-system" Pod="csi-node-driver-j9qch" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0" Mar 25 02:34:51.533698 containerd[1474]: 2025-03-25 02:34:51.507 [INFO][4296] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3e1943b139 ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Namespace="calico-system" Pod="csi-node-driver-j9qch" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0" Mar 25 02:34:51.533698 containerd[1474]: 2025-03-25 02:34:51.513 [INFO][4296] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Namespace="calico-system" Pod="csi-node-driver-j9qch" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0" Mar 25 02:34:51.533698 containerd[1474]: 2025-03-25 02:34:51.514 [INFO][4296] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Namespace="calico-system" Pod="csi-node-driver-j9qch" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f79a0a0e-5416-4235-b1a3-2a817bf38d19", ResourceVersion:"581", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec", Pod:"csi-node-driver-j9qch", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3e1943b139", MAC:"32:bb:f6:e3:af:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:51.533698 containerd[1474]: 2025-03-25 02:34:51.528 [INFO][4296] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" Namespace="calico-system" Pod="csi-node-driver-j9qch" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-csi--node--driver--j9qch-eth0" Mar 25 02:34:51.582281 containerd[1474]: time="2025-03-25T02:34:51.581251928Z" level=info msg="connecting to shim 
6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec" address="unix:///run/containerd/s/440b980e301268959872c9d70ef7b35245011c37e4191e9e62832263e0a53be8" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:34:51.621472 systemd[1]: Started cri-containerd-6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec.scope - libcontainer container 6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec. Mar 25 02:34:51.636029 systemd-networkd[1389]: cali79d76d1a685: Link UP Mar 25 02:34:51.636217 systemd-networkd[1389]: cali79d76d1a685: Gained carrier Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.377 [INFO][4306] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0 calico-apiserver-6975797848- calico-apiserver 7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2 678 0 2025-03-25 02:34:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6975797848 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284-0-0-3-6c96446f48.novalocal calico-apiserver-6975797848-8h5kp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali79d76d1a685 [] []}} ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-8h5kp" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.377 [INFO][4306] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-8h5kp" 
WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.439 [INFO][4323] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" HandleID="k8s-pod-network.9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.458 [INFO][4323] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" HandleID="k8s-pod-network.9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030ecd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284-0-0-3-6c96446f48.novalocal", "pod":"calico-apiserver-6975797848-8h5kp", "timestamp":"2025-03-25 02:34:51.439189727 +0000 UTC"}, Hostname:"ci-4284-0-0-3-6c96446f48.novalocal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.458 [INFO][4323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.505 [INFO][4323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.505 [INFO][4323] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-3-6c96446f48.novalocal' Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.561 [INFO][4323] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.573 [INFO][4323] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.585 [INFO][4323] ipam/ipam.go 489: Trying affinity for 192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.597 [INFO][4323] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.605 [INFO][4323] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.605 [INFO][4323] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.608 [INFO][4323] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.618 [INFO][4323] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.654435 
containerd[1474]: 2025-03-25 02:34:51.630 [INFO][4323] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.134/26] block=192.168.47.128/26 handle="k8s-pod-network.9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.630 [INFO][4323] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.134/26] handle="k8s-pod-network.9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" host="ci-4284-0-0-3-6c96446f48.novalocal" Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.630 [INFO][4323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 02:34:51.654435 containerd[1474]: 2025-03-25 02:34:51.630 [INFO][4323] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.134/26] IPv6=[] ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" HandleID="k8s-pod-network.9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Workload="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0" Mar 25 02:34:51.655717 containerd[1474]: 2025-03-25 02:34:51.631 [INFO][4306] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-8h5kp" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0", GenerateName:"calico-apiserver-6975797848-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 20, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6975797848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"", Pod:"calico-apiserver-6975797848-8h5kp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79d76d1a685", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:51.655717 containerd[1474]: 2025-03-25 02:34:51.632 [INFO][4306] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.134/32] ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-8h5kp" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0" Mar 25 02:34:51.655717 containerd[1474]: 2025-03-25 02:34:51.632 [INFO][4306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79d76d1a685 ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-8h5kp" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0" Mar 25 02:34:51.655717 containerd[1474]: 2025-03-25 02:34:51.635 [INFO][4306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-8h5kp" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0" Mar 25 02:34:51.655717 containerd[1474]: 2025-03-25 02:34:51.635 [INFO][4306] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-8h5kp" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0", GenerateName:"calico-apiserver-6975797848-", Namespace:"calico-apiserver", SelfLink:"", UID:"7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 2, 34, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6975797848", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-3-6c96446f48.novalocal", ContainerID:"9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd", Pod:"calico-apiserver-6975797848-8h5kp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79d76d1a685", MAC:"8e:1e:5a:35:0d:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 02:34:51.655717 containerd[1474]: 2025-03-25 02:34:51.652 [INFO][4306] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" Namespace="calico-apiserver" Pod="calico-apiserver-6975797848-8h5kp" WorkloadEndpoint="ci--4284--0--0--3--6c96446f48.novalocal-k8s-calico--apiserver--6975797848--8h5kp-eth0" Mar 25 02:34:51.700512 containerd[1474]: time="2025-03-25T02:34:51.700475298Z" level=info msg="connecting to shim 9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd" address="unix:///run/containerd/s/562fb699095626408d31ced521a610707079ca858fdc7f77aa5b89a968c27dbe" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:34:51.717938 containerd[1474]: time="2025-03-25T02:34:51.717878327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j9qch,Uid:f79a0a0e-5416-4235-b1a3-2a817bf38d19,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec\"" Mar 25 02:34:51.744468 systemd[1]: Started cri-containerd-9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd.scope - libcontainer container 9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd. 
Mar 25 02:34:51.795345 containerd[1474]: time="2025-03-25T02:34:51.793993960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6975797848-8h5kp,Uid:7f391f5b-dcc7-4a47-b54c-3b6f49c4fbe2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd\"" Mar 25 02:34:53.414719 systemd-networkd[1389]: calif3e1943b139: Gained IPv6LL Mar 25 02:34:53.554711 containerd[1474]: time="2025-03-25T02:34:53.554646941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:34:53.556138 containerd[1474]: time="2025-03-25T02:34:53.556012902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912" Mar 25 02:34:53.557459 containerd[1474]: time="2025-03-25T02:34:53.557417261Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:34:53.559843 containerd[1474]: time="2025-03-25T02:34:53.559801807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:34:53.560832 containerd[1474]: time="2025-03-25T02:34:53.560421854Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 4.812999901s" Mar 25 02:34:53.560832 containerd[1474]: time="2025-03-25T02:34:53.560461025Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\"" Mar 25 02:34:53.562925 containerd[1474]: time="2025-03-25T02:34:53.562372579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 25 02:34:53.572182 containerd[1474]: time="2025-03-25T02:34:53.572057355Z" level=info msg="CreateContainer within sandbox \"48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 25 02:34:53.598351 containerd[1474]: time="2025-03-25T02:34:53.597503956Z" level=info msg="Container 40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:34:53.605978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3600262261.mount: Deactivated successfully. Mar 25 02:34:53.617992 containerd[1474]: time="2025-03-25T02:34:53.617945049Z" level=info msg="CreateContainer within sandbox \"48a532df19a7aad4e46f8fbb12c761869a8ac739490cb873e7f08b0f2adc017a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\"" Mar 25 02:34:53.619290 containerd[1474]: time="2025-03-25T02:34:53.619123763Z" level=info msg="StartContainer for \"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\"" Mar 25 02:34:53.621048 containerd[1474]: time="2025-03-25T02:34:53.620657095Z" level=info msg="connecting to shim 40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0" address="unix:///run/containerd/s/a27ae1b06b0513a3f85eb065306dd48b30a2854cd0f4eb81d7d46b52452c4d0c" protocol=ttrpc version=3 Mar 25 02:34:53.645426 systemd[1]: Started cri-containerd-40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0.scope - libcontainer container 40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0. 
Mar 25 02:34:53.669424 systemd-networkd[1389]: cali79d76d1a685: Gained IPv6LL
Mar 25 02:34:53.698437 containerd[1474]: time="2025-03-25T02:34:53.698397922Z" level=info msg="StartContainer for \"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" returns successfully"
Mar 25 02:34:54.569688 kubelet[2696]: I0325 02:34:54.569433 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5fff7f6b85-pbvr6" podStartSLOduration=29.75407854 podStartE2EDuration="34.569418058s" podCreationTimestamp="2025-03-25 02:34:20 +0000 UTC" firstStartedPulling="2025-03-25 02:34:48.746247786 +0000 UTC m=+40.617002578" lastFinishedPulling="2025-03-25 02:34:53.561587304 +0000 UTC m=+45.432342096" observedRunningTime="2025-03-25 02:34:54.569118327 +0000 UTC m=+46.439873129" watchObservedRunningTime="2025-03-25 02:34:54.569418058 +0000 UTC m=+46.440172850"
Mar 25 02:34:54.637110 containerd[1474]: time="2025-03-25T02:34:54.636579454Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"d5f832f7e4ecf22833d472fbdfdeac31404f8c6b22e5000cb2d66cb4603417d9\" pid:4510 exited_at:{seconds:1742870094 nanos:635882065}"
Mar 25 02:34:57.108559 containerd[1474]: time="2025-03-25T02:34:57.108453870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:57.110598 containerd[1474]: time="2025-03-25T02:34:57.110542958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204"
Mar 25 02:34:57.112396 containerd[1474]: time="2025-03-25T02:34:57.112337982Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:57.115193 containerd[1474]: time="2025-03-25T02:34:57.115149715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:57.115908 containerd[1474]: time="2025-03-25T02:34:57.115610403Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 3.553206768s"
Mar 25 02:34:57.115908 containerd[1474]: time="2025-03-25T02:34:57.115641369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 25 02:34:57.118275 containerd[1474]: time="2025-03-25T02:34:57.118232772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\""
Mar 25 02:34:57.119119 containerd[1474]: time="2025-03-25T02:34:57.118987423Z" level=info msg="CreateContainer within sandbox \"b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 25 02:34:57.133056 containerd[1474]: time="2025-03-25T02:34:57.132240583Z" level=info msg="Container 63b12fc504e6a651bf22de0464c986791dbe09d77c203f75f09b65c10736549f: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:34:57.145935 containerd[1474]: time="2025-03-25T02:34:57.145902895Z" level=info msg="CreateContainer within sandbox \"b48972ff35be772a2c96612202c8941c3f0b49244431756f888d98abdec28791\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"63b12fc504e6a651bf22de0464c986791dbe09d77c203f75f09b65c10736549f\""
Mar 25 02:34:57.146678 containerd[1474]: time="2025-03-25T02:34:57.146642841Z" level=info msg="StartContainer for \"63b12fc504e6a651bf22de0464c986791dbe09d77c203f75f09b65c10736549f\""
Mar 25 02:34:57.148151 containerd[1474]: time="2025-03-25T02:34:57.148130035Z" level=info msg="connecting to shim 63b12fc504e6a651bf22de0464c986791dbe09d77c203f75f09b65c10736549f" address="unix:///run/containerd/s/029d4d2d30107d065941382899e19829cb51e74ccf370bd59af701d1f478159c" protocol=ttrpc version=3
Mar 25 02:34:57.171413 systemd[1]: Started cri-containerd-63b12fc504e6a651bf22de0464c986791dbe09d77c203f75f09b65c10736549f.scope - libcontainer container 63b12fc504e6a651bf22de0464c986791dbe09d77c203f75f09b65c10736549f.
Mar 25 02:34:57.231051 containerd[1474]: time="2025-03-25T02:34:57.230528674Z" level=info msg="StartContainer for \"63b12fc504e6a651bf22de0464c986791dbe09d77c203f75f09b65c10736549f\" returns successfully"
Mar 25 02:34:58.638288 kubelet[2696]: I0325 02:34:58.636799 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6975797848-wwxm2" podStartSLOduration=31.235358662 podStartE2EDuration="38.636782434s" podCreationTimestamp="2025-03-25 02:34:20 +0000 UTC" firstStartedPulling="2025-03-25 02:34:49.715054108 +0000 UTC m=+41.585808900" lastFinishedPulling="2025-03-25 02:34:57.11647788 +0000 UTC m=+48.987232672" observedRunningTime="2025-03-25 02:34:57.574630817 +0000 UTC m=+49.445385659" watchObservedRunningTime="2025-03-25 02:34:58.636782434 +0000 UTC m=+50.507537226"
Mar 25 02:34:59.923849 containerd[1474]: time="2025-03-25T02:34:59.923749529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:59.925810 containerd[1474]: time="2025-03-25T02:34:59.925738721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887"
Mar 25 02:34:59.927472 containerd[1474]: time="2025-03-25T02:34:59.927120044Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:59.930659 containerd[1474]: time="2025-03-25T02:34:59.930599387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:34:59.931466 containerd[1474]: time="2025-03-25T02:34:59.931439701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 2.813044495s"
Mar 25 02:34:59.931550 containerd[1474]: time="2025-03-25T02:34:59.931535747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\""
Mar 25 02:34:59.933651 containerd[1474]: time="2025-03-25T02:34:59.933631243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\""
Mar 25 02:34:59.936930 containerd[1474]: time="2025-03-25T02:34:59.936199292Z" level=info msg="CreateContainer within sandbox \"6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 25 02:34:59.956659 containerd[1474]: time="2025-03-25T02:34:59.956593813Z" level=info msg="Container 9b2740ec3559755b9558da6e751e08ed67374a14b9d03db3a2f3dfa7ba217190: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:34:59.970730 containerd[1474]: time="2025-03-25T02:34:59.970665427Z" level=info msg="CreateContainer within sandbox \"6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9b2740ec3559755b9558da6e751e08ed67374a14b9d03db3a2f3dfa7ba217190\""
Mar 25 02:34:59.971403 containerd[1474]: time="2025-03-25T02:34:59.971328989Z" level=info msg="StartContainer for \"9b2740ec3559755b9558da6e751e08ed67374a14b9d03db3a2f3dfa7ba217190\""
Mar 25 02:34:59.975653 containerd[1474]: time="2025-03-25T02:34:59.975451407Z" level=info msg="connecting to shim 9b2740ec3559755b9558da6e751e08ed67374a14b9d03db3a2f3dfa7ba217190" address="unix:///run/containerd/s/440b980e301268959872c9d70ef7b35245011c37e4191e9e62832263e0a53be8" protocol=ttrpc version=3
Mar 25 02:35:00.011427 systemd[1]: Started cri-containerd-9b2740ec3559755b9558da6e751e08ed67374a14b9d03db3a2f3dfa7ba217190.scope - libcontainer container 9b2740ec3559755b9558da6e751e08ed67374a14b9d03db3a2f3dfa7ba217190.
Mar 25 02:35:00.061436 containerd[1474]: time="2025-03-25T02:35:00.061368482Z" level=info msg="StartContainer for \"9b2740ec3559755b9558da6e751e08ed67374a14b9d03db3a2f3dfa7ba217190\" returns successfully"
Mar 25 02:35:00.426807 containerd[1474]: time="2025-03-25T02:35:00.426654734Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:35:00.429066 containerd[1474]: time="2025-03-25T02:35:00.428338674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77"
Mar 25 02:35:00.434058 containerd[1474]: time="2025-03-25T02:35:00.433962123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 500.198218ms"
Mar 25 02:35:00.434206 containerd[1474]: time="2025-03-25T02:35:00.434052489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 25 02:35:00.437336 containerd[1474]: time="2025-03-25T02:35:00.437177175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\""
Mar 25 02:35:00.439790 containerd[1474]: time="2025-03-25T02:35:00.439632218Z" level=info msg="CreateContainer within sandbox \"9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 25 02:35:00.456341 containerd[1474]: time="2025-03-25T02:35:00.454167873Z" level=info msg="Container 7a1a0b660d85fba5b47d8153aeb33e89a2cf37cf8490960c94f048277ceb2047: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:35:00.480360 containerd[1474]: time="2025-03-25T02:35:00.479948793Z" level=info msg="CreateContainer within sandbox \"9499f60c62b6c570c5a9563074299cd429340ae1f3af539c1d14e48a663888fd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7a1a0b660d85fba5b47d8153aeb33e89a2cf37cf8490960c94f048277ceb2047\""
Mar 25 02:35:00.481816 containerd[1474]: time="2025-03-25T02:35:00.481661888Z" level=info msg="StartContainer for \"7a1a0b660d85fba5b47d8153aeb33e89a2cf37cf8490960c94f048277ceb2047\""
Mar 25 02:35:00.485483 containerd[1474]: time="2025-03-25T02:35:00.485349284Z" level=info msg="connecting to shim 7a1a0b660d85fba5b47d8153aeb33e89a2cf37cf8490960c94f048277ceb2047" address="unix:///run/containerd/s/562fb699095626408d31ced521a610707079ca858fdc7f77aa5b89a968c27dbe" protocol=ttrpc version=3
Mar 25 02:35:00.529442 systemd[1]: Started cri-containerd-7a1a0b660d85fba5b47d8153aeb33e89a2cf37cf8490960c94f048277ceb2047.scope - libcontainer container 7a1a0b660d85fba5b47d8153aeb33e89a2cf37cf8490960c94f048277ceb2047.
Mar 25 02:35:00.590027 containerd[1474]: time="2025-03-25T02:35:00.589924021Z" level=info msg="StartContainer for \"7a1a0b660d85fba5b47d8153aeb33e89a2cf37cf8490960c94f048277ceb2047\" returns successfully"
Mar 25 02:35:00.707745 containerd[1474]: time="2025-03-25T02:35:00.707522386Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" id:\"c0fa9a14a8380d60e7c156b42c1cac6a3effd4ef30011cf6dbb68aeb1be3e25d\" pid:4652 exited_at:{seconds:1742870100 nanos:705989622}"
Mar 25 02:35:01.583299 kubelet[2696]: I0325 02:35:01.582115 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6975797848-8h5kp" podStartSLOduration=32.943462359 podStartE2EDuration="41.582099496s" podCreationTimestamp="2025-03-25 02:34:20 +0000 UTC" firstStartedPulling="2025-03-25 02:34:51.796880082 +0000 UTC m=+43.667634874" lastFinishedPulling="2025-03-25 02:35:00.435517169 +0000 UTC m=+52.306272011" observedRunningTime="2025-03-25 02:35:01.579280137 +0000 UTC m=+53.450034959" watchObservedRunningTime="2025-03-25 02:35:01.582099496 +0000 UTC m=+53.452854288"
Mar 25 02:35:03.748486 containerd[1474]: time="2025-03-25T02:35:03.748410281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:35:03.750945 containerd[1474]: time="2025-03-25T02:35:03.750905092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843"
Mar 25 02:35:03.753982 containerd[1474]: time="2025-03-25T02:35:03.753952168Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:35:03.757369 containerd[1474]: time="2025-03-25T02:35:03.757343739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 02:35:03.759739 containerd[1474]: time="2025-03-25T02:35:03.759690778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 3.32193332s"
Mar 25 02:35:03.759958 containerd[1474]: time="2025-03-25T02:35:03.759840452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\""
Mar 25 02:35:03.766463 containerd[1474]: time="2025-03-25T02:35:03.764902266Z" level=info msg="CreateContainer within sandbox \"6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 25 02:35:03.780900 containerd[1474]: time="2025-03-25T02:35:03.780860930Z" level=info msg="Container 012dead2ff4111f8b5c5155b1fefb3a7fc0c6b5b592b0f3f0c3d332812d6ca37: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:35:03.804703 containerd[1474]: time="2025-03-25T02:35:03.804657274Z" level=info msg="CreateContainer within sandbox \"6ba96dccb2dffbf77d2ac4b4b68c51c59d29f31b2429e7cd5ba18203a17210ec\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"012dead2ff4111f8b5c5155b1fefb3a7fc0c6b5b592b0f3f0c3d332812d6ca37\""
Mar 25 02:35:03.805814 containerd[1474]: time="2025-03-25T02:35:03.805757528Z" level=info msg="StartContainer for \"012dead2ff4111f8b5c5155b1fefb3a7fc0c6b5b592b0f3f0c3d332812d6ca37\""
Mar 25 02:35:03.807972 containerd[1474]: time="2025-03-25T02:35:03.807941447Z" level=info msg="connecting to shim 012dead2ff4111f8b5c5155b1fefb3a7fc0c6b5b592b0f3f0c3d332812d6ca37" address="unix:///run/containerd/s/440b980e301268959872c9d70ef7b35245011c37e4191e9e62832263e0a53be8" protocol=ttrpc version=3
Mar 25 02:35:03.839061 systemd[1]: Started cri-containerd-012dead2ff4111f8b5c5155b1fefb3a7fc0c6b5b592b0f3f0c3d332812d6ca37.scope - libcontainer container 012dead2ff4111f8b5c5155b1fefb3a7fc0c6b5b592b0f3f0c3d332812d6ca37.
Mar 25 02:35:03.914760 containerd[1474]: time="2025-03-25T02:35:03.914337821Z" level=info msg="StartContainer for \"012dead2ff4111f8b5c5155b1fefb3a7fc0c6b5b592b0f3f0c3d332812d6ca37\" returns successfully"
Mar 25 02:35:04.395106 kubelet[2696]: I0325 02:35:04.394937 2696 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 25 02:35:04.395106 kubelet[2696]: I0325 02:35:04.394992 2696 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 25 02:35:04.771389 kubelet[2696]: I0325 02:35:04.770709 2696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j9qch" podStartSLOduration=32.732265902 podStartE2EDuration="44.770694103s" podCreationTimestamp="2025-03-25 02:34:20 +0000 UTC" firstStartedPulling="2025-03-25 02:34:51.722426878 +0000 UTC m=+43.593181670" lastFinishedPulling="2025-03-25 02:35:03.760855079 +0000 UTC m=+55.631609871" observedRunningTime="2025-03-25 02:35:04.768991024 +0000 UTC m=+56.639745826" watchObservedRunningTime="2025-03-25 02:35:04.770694103 +0000 UTC m=+56.641448895"
Mar 25 02:35:10.156848 containerd[1474]: time="2025-03-25T02:35:10.156659060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"e3c10b65afe93d79746e1a7fa8a77d751b825b4e478d17a9242dbdb1dc0c1c65\" pid:4732 exited_at:{seconds:1742870110 nanos:156323126}"
Mar 25 02:35:26.279499 containerd[1474]: time="2025-03-25T02:35:26.279427484Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"1bf2aafe5fbacc513c0300086653d98f7b8cb465b612147a6e286f0bd9c962a6\" pid:4762 exited_at:{seconds:1742870126 nanos:278890087}"
Mar 25 02:35:30.740445 containerd[1474]: time="2025-03-25T02:35:30.740397915Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" id:\"9976bbc0088fc56321caf22071ea379b7c940ec271a937c08155d7b6e2dd6fe2\" pid:4789 exited_at:{seconds:1742870130 nanos:740004108}"
Mar 25 02:35:40.174804 containerd[1474]: time="2025-03-25T02:35:40.174741683Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"51afd0a03cf2bb7c8cba99e5e2d722040f9245374cd8f706634ddabd3fa253b3\" pid:4817 exited_at:{seconds:1742870140 nanos:174253711}"
Mar 25 02:36:00.745079 containerd[1474]: time="2025-03-25T02:36:00.745023164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" id:\"d6ff26ad857cf623791a4a714430a3efdcf97a5192b501a9273cd31e510bee35\" pid:4842 exited_at:{seconds:1742870160 nanos:744051933}"
Mar 25 02:36:10.161218 containerd[1474]: time="2025-03-25T02:36:10.161047628Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"e26a1f8cd1281016b4ba93c245d7ecda701bd2039e76f2ee562e48c650d127af\" pid:4874 exited_at:{seconds:1742870170 nanos:160868792}"
Mar 25 02:36:26.276692 containerd[1474]: time="2025-03-25T02:36:26.276509184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"38e816f608a6dfee039107a4429d142331bd30f2a8ad61911e30a7a44c37ece6\" pid:4918 exited_at:{seconds:1742870186 nanos:275678286}"
Mar 25 02:36:30.742721 containerd[1474]: time="2025-03-25T02:36:30.742605068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" id:\"80bd15cfd174fc2642df305e69eb29bbaae1ebbfbc22171fea3f4766c2213dba\" pid:4940 exited_at:{seconds:1742870190 nanos:742145568}"
Mar 25 02:36:40.173041 containerd[1474]: time="2025-03-25T02:36:40.172978897Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"77571440f55cb73e6947cde4dc6b7e78808c70f27550f4495ab90677dc9c75f4\" pid:4964 exited_at:{seconds:1742870200 nanos:171328958}"
Mar 25 02:36:50.153255 systemd[1]: Started sshd@9-172.24.4.54:22-172.24.4.1:40176.service - OpenSSH per-connection server daemon (172.24.4.1:40176).
Mar 25 02:36:51.345339 sshd[4979]: Accepted publickey for core from 172.24.4.1 port 40176 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:36:51.382489 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:36:51.426564 systemd-logind[1458]: New session 12 of user core.
Mar 25 02:36:51.439025 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 25 02:36:52.066915 sshd[4981]: Connection closed by 172.24.4.1 port 40176
Mar 25 02:36:52.068373 sshd-session[4979]: pam_unix(sshd:session): session closed for user core
Mar 25 02:36:52.077859 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit.
Mar 25 02:36:52.078534 systemd[1]: sshd@9-172.24.4.54:22-172.24.4.1:40176.service: Deactivated successfully.
Mar 25 02:36:52.084392 systemd[1]: session-12.scope: Deactivated successfully.
Mar 25 02:36:52.087374 systemd-logind[1458]: Removed session 12.
Mar 25 02:36:57.087186 systemd[1]: Started sshd@10-172.24.4.54:22-172.24.4.1:39454.service - OpenSSH per-connection server daemon (172.24.4.1:39454).
Mar 25 02:36:58.322848 sshd[4994]: Accepted publickey for core from 172.24.4.1 port 39454 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:36:58.326402 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:36:58.339050 systemd-logind[1458]: New session 13 of user core.
Mar 25 02:36:58.345568 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 25 02:36:59.025560 sshd[4996]: Connection closed by 172.24.4.1 port 39454
Mar 25 02:36:59.025403 sshd-session[4994]: pam_unix(sshd:session): session closed for user core
Mar 25 02:36:59.032629 systemd[1]: sshd@10-172.24.4.54:22-172.24.4.1:39454.service: Deactivated successfully.
Mar 25 02:36:59.036641 systemd[1]: session-13.scope: Deactivated successfully.
Mar 25 02:36:59.039842 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit.
Mar 25 02:36:59.042618 systemd-logind[1458]: Removed session 13.
Mar 25 02:37:00.736102 containerd[1474]: time="2025-03-25T02:37:00.736036503Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" id:\"dc6ce5f89959a8ac9163ae7c2bdc089094c3a78ae26dfa12b7acfb89726d2d53\" pid:5023 exited_at:{seconds:1742870220 nanos:735392762}"
Mar 25 02:37:04.049593 systemd[1]: Started sshd@11-172.24.4.54:22-172.24.4.1:35894.service - OpenSSH per-connection server daemon (172.24.4.1:35894).
Mar 25 02:37:05.271008 sshd[5037]: Accepted publickey for core from 172.24.4.1 port 35894 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:05.274177 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:05.287474 systemd-logind[1458]: New session 14 of user core.
Mar 25 02:37:05.292626 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 25 02:37:06.089069 sshd[5039]: Connection closed by 172.24.4.1 port 35894
Mar 25 02:37:06.090174 sshd-session[5037]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:06.097777 systemd[1]: sshd@11-172.24.4.54:22-172.24.4.1:35894.service: Deactivated successfully.
Mar 25 02:37:06.101385 systemd[1]: session-14.scope: Deactivated successfully.
Mar 25 02:37:06.103554 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit.
Mar 25 02:37:06.106527 systemd-logind[1458]: Removed session 14.
Mar 25 02:37:10.174014 containerd[1474]: time="2025-03-25T02:37:10.173959304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"f50173e12e044be2af0fd4babd4fc9b1061fd07036a94f1decf25b75d9d876dd\" pid:5064 exited_at:{seconds:1742870230 nanos:173477841}"
Mar 25 02:37:11.109336 systemd[1]: Started sshd@12-172.24.4.54:22-172.24.4.1:35904.service - OpenSSH per-connection server daemon (172.24.4.1:35904).
Mar 25 02:37:12.141359 sshd[5074]: Accepted publickey for core from 172.24.4.1 port 35904 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:12.144066 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:12.157396 systemd-logind[1458]: New session 15 of user core.
Mar 25 02:37:12.163065 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 25 02:37:13.021094 sshd[5076]: Connection closed by 172.24.4.1 port 35904
Mar 25 02:37:13.022658 sshd-session[5074]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:13.039765 systemd[1]: sshd@12-172.24.4.54:22-172.24.4.1:35904.service: Deactivated successfully.
Mar 25 02:37:13.044509 systemd[1]: session-15.scope: Deactivated successfully.
Mar 25 02:37:13.047780 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit.
Mar 25 02:37:13.054729 systemd[1]: Started sshd@13-172.24.4.54:22-172.24.4.1:35914.service - OpenSSH per-connection server daemon (172.24.4.1:35914).
Mar 25 02:37:13.057817 systemd-logind[1458]: Removed session 15.
Mar 25 02:37:14.388896 sshd[5088]: Accepted publickey for core from 172.24.4.1 port 35914 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:14.391640 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:14.403861 systemd-logind[1458]: New session 16 of user core.
Mar 25 02:37:14.413597 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 25 02:37:15.404951 sshd[5091]: Connection closed by 172.24.4.1 port 35914
Mar 25 02:37:15.406181 sshd-session[5088]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:15.420183 systemd[1]: sshd@13-172.24.4.54:22-172.24.4.1:35914.service: Deactivated successfully.
Mar 25 02:37:15.423815 systemd[1]: session-16.scope: Deactivated successfully.
Mar 25 02:37:15.428561 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit.
Mar 25 02:37:15.431656 systemd[1]: Started sshd@14-172.24.4.54:22-172.24.4.1:59070.service - OpenSSH per-connection server daemon (172.24.4.1:59070).
Mar 25 02:37:15.435691 systemd-logind[1458]: Removed session 16.
Mar 25 02:37:16.815119 sshd[5100]: Accepted publickey for core from 172.24.4.1 port 59070 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:16.818250 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:16.830791 systemd-logind[1458]: New session 17 of user core.
Mar 25 02:37:16.840820 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 25 02:37:17.519652 sshd[5106]: Connection closed by 172.24.4.1 port 59070
Mar 25 02:37:17.521357 sshd-session[5100]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:17.531995 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit.
Mar 25 02:37:17.532875 systemd[1]: sshd@14-172.24.4.54:22-172.24.4.1:59070.service: Deactivated successfully.
Mar 25 02:37:17.538898 systemd[1]: session-17.scope: Deactivated successfully.
Mar 25 02:37:17.541932 systemd-logind[1458]: Removed session 17.
Mar 25 02:37:22.540816 systemd[1]: Started sshd@15-172.24.4.54:22-172.24.4.1:59082.service - OpenSSH per-connection server daemon (172.24.4.1:59082).
Mar 25 02:37:23.886539 sshd[5123]: Accepted publickey for core from 172.24.4.1 port 59082 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:23.889364 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:23.902497 systemd-logind[1458]: New session 18 of user core.
Mar 25 02:37:23.915626 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 25 02:37:24.728700 sshd[5125]: Connection closed by 172.24.4.1 port 59082
Mar 25 02:37:24.729317 sshd-session[5123]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:24.732837 systemd[1]: sshd@15-172.24.4.54:22-172.24.4.1:59082.service: Deactivated successfully.
Mar 25 02:37:24.735084 systemd[1]: session-18.scope: Deactivated successfully.
Mar 25 02:37:24.737059 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit.
Mar 25 02:37:24.738639 systemd-logind[1458]: Removed session 18.
Mar 25 02:37:26.280656 containerd[1474]: time="2025-03-25T02:37:26.280568793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"36333252b64868c7bcaef605890601e070b3c06ed64a77b5a51778d2d070cfcd\" pid:5149 exited_at:{seconds:1742870246 nanos:279114888}"
Mar 25 02:37:29.743041 systemd[1]: Started sshd@16-172.24.4.54:22-172.24.4.1:34552.service - OpenSSH per-connection server daemon (172.24.4.1:34552).
Mar 25 02:37:30.741951 containerd[1474]: time="2025-03-25T02:37:30.741901697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" id:\"e3e38f2788ac9d01360b89c47eef4e5fbb233c4e35b9c06c949459e972485c59\" pid:5180 exited_at:{seconds:1742870250 nanos:741548699}"
Mar 25 02:37:30.997248 sshd[5165]: Accepted publickey for core from 172.24.4.1 port 34552 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:31.000883 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:31.012743 systemd-logind[1458]: New session 19 of user core.
Mar 25 02:37:31.022641 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 25 02:37:31.689473 sshd[5193]: Connection closed by 172.24.4.1 port 34552
Mar 25 02:37:31.689997 sshd-session[5165]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:31.697748 systemd[1]: sshd@16-172.24.4.54:22-172.24.4.1:34552.service: Deactivated successfully.
Mar 25 02:37:31.701663 systemd[1]: session-19.scope: Deactivated successfully.
Mar 25 02:37:31.704059 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit.
Mar 25 02:37:31.706878 systemd-logind[1458]: Removed session 19.
Mar 25 02:37:36.711006 systemd[1]: Started sshd@17-172.24.4.54:22-172.24.4.1:37890.service - OpenSSH per-connection server daemon (172.24.4.1:37890).
Mar 25 02:37:37.896955 sshd[5206]: Accepted publickey for core from 172.24.4.1 port 37890 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:37.899656 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:37.912631 systemd-logind[1458]: New session 20 of user core.
Mar 25 02:37:37.920593 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 25 02:37:38.740527 sshd[5208]: Connection closed by 172.24.4.1 port 37890
Mar 25 02:37:38.741439 sshd-session[5206]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:38.755852 systemd[1]: sshd@17-172.24.4.54:22-172.24.4.1:37890.service: Deactivated successfully.
Mar 25 02:37:38.760142 systemd[1]: session-20.scope: Deactivated successfully.
Mar 25 02:37:38.762422 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit.
Mar 25 02:37:38.767079 systemd[1]: Started sshd@18-172.24.4.54:22-172.24.4.1:37902.service - OpenSSH per-connection server daemon (172.24.4.1:37902).
Mar 25 02:37:38.771720 systemd-logind[1458]: Removed session 20.
Mar 25 02:37:39.957388 sshd[5219]: Accepted publickey for core from 172.24.4.1 port 37902 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:39.960562 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:39.971765 systemd-logind[1458]: New session 21 of user core.
Mar 25 02:37:39.977616 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 25 02:37:40.175076 containerd[1474]: time="2025-03-25T02:37:40.175032169Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"ab9b514c8d726c3a08e43175a3fce974ff5d3f57f8721b283b4401c8368e22cd\" pid:5235 exited_at:{seconds:1742870260 nanos:174717993}"
Mar 25 02:37:40.880325 sshd[5222]: Connection closed by 172.24.4.1 port 37902
Mar 25 02:37:40.880854 sshd-session[5219]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:40.899133 systemd[1]: sshd@18-172.24.4.54:22-172.24.4.1:37902.service: Deactivated successfully.
Mar 25 02:37:40.902830 systemd[1]: session-21.scope: Deactivated successfully.
Mar 25 02:37:40.905117 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit.
Mar 25 02:37:40.909921 systemd[1]: Started sshd@19-172.24.4.54:22-172.24.4.1:37910.service - OpenSSH per-connection server daemon (172.24.4.1:37910).
Mar 25 02:37:40.913192 systemd-logind[1458]: Removed session 21.
Mar 25 02:37:42.369861 sshd[5252]: Accepted publickey for core from 172.24.4.1 port 37910 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:42.373598 sshd-session[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:42.388081 systemd-logind[1458]: New session 22 of user core.
Mar 25 02:37:42.394644 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 25 02:37:45.609186 sshd[5255]: Connection closed by 172.24.4.1 port 37910
Mar 25 02:37:45.610416 sshd-session[5252]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:45.624494 systemd[1]: Started sshd@20-172.24.4.54:22-172.24.4.1:45600.service - OpenSSH per-connection server daemon (172.24.4.1:45600).
Mar 25 02:37:45.624988 systemd[1]: sshd@19-172.24.4.54:22-172.24.4.1:37910.service: Deactivated successfully.
Mar 25 02:37:45.627922 systemd[1]: session-22.scope: Deactivated successfully.
Mar 25 02:37:45.629323 systemd[1]: session-22.scope: Consumed 930ms CPU time, 73.3M memory peak.
Mar 25 02:37:45.635827 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit.
Mar 25 02:37:45.639760 systemd-logind[1458]: Removed session 22.
Mar 25 02:37:46.922228 sshd[5274]: Accepted publickey for core from 172.24.4.1 port 45600 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:46.924892 sshd-session[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:46.935894 systemd-logind[1458]: New session 23 of user core.
Mar 25 02:37:46.944650 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 25 02:37:47.942325 sshd[5281]: Connection closed by 172.24.4.1 port 45600
Mar 25 02:37:47.943538 sshd-session[5274]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:47.957845 systemd[1]: sshd@20-172.24.4.54:22-172.24.4.1:45600.service: Deactivated successfully.
Mar 25 02:37:47.962000 systemd[1]: session-23.scope: Deactivated successfully.
Mar 25 02:37:47.964855 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit.
Mar 25 02:37:47.969964 systemd[1]: Started sshd@21-172.24.4.54:22-172.24.4.1:45606.service - OpenSSH per-connection server daemon (172.24.4.1:45606).
Mar 25 02:37:47.973923 systemd-logind[1458]: Removed session 23.
Mar 25 02:37:49.070174 sshd[5291]: Accepted publickey for core from 172.24.4.1 port 45606 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:49.072907 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:49.086742 systemd-logind[1458]: New session 24 of user core.
Mar 25 02:37:49.096697 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 25 02:37:49.879995 sshd[5294]: Connection closed by 172.24.4.1 port 45606
Mar 25 02:37:49.881051 sshd-session[5291]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:49.889427 systemd[1]: sshd@21-172.24.4.54:22-172.24.4.1:45606.service: Deactivated successfully.
Mar 25 02:37:49.895130 systemd[1]: session-24.scope: Deactivated successfully.
Mar 25 02:37:49.898398 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit.
Mar 25 02:37:49.900961 systemd-logind[1458]: Removed session 24.
Mar 25 02:37:54.894846 systemd[1]: Started sshd@22-172.24.4.54:22-172.24.4.1:38730.service - OpenSSH per-connection server daemon (172.24.4.1:38730).
Mar 25 02:37:56.112440 sshd[5311]: Accepted publickey for core from 172.24.4.1 port 38730 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:37:56.115454 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:37:56.127931 systemd-logind[1458]: New session 25 of user core.
Mar 25 02:37:56.137704 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 25 02:37:57.131667 sshd[5324]: Connection closed by 172.24.4.1 port 38730
Mar 25 02:37:57.131508 sshd-session[5311]: pam_unix(sshd:session): session closed for user core
Mar 25 02:37:57.141994 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit.
Mar 25 02:37:57.142756 systemd[1]: sshd@22-172.24.4.54:22-172.24.4.1:38730.service: Deactivated successfully.
Mar 25 02:37:57.147399 systemd[1]: session-25.scope: Deactivated successfully.
Mar 25 02:37:57.150657 systemd-logind[1458]: Removed session 25.
Mar 25 02:38:00.740919 containerd[1474]: time="2025-03-25T02:38:00.740563942Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" id:\"3b24f560fd9e2de7429137cfd1448749956d83df998216f33411053dc0585ff0\" pid:5353 exited_at:{seconds:1742870280 nanos:738315944}"
Mar 25 02:38:02.150407 systemd[1]: Started sshd@23-172.24.4.54:22-172.24.4.1:38732.service - OpenSSH per-connection server daemon (172.24.4.1:38732).
Mar 25 02:38:03.524155 sshd[5366]: Accepted publickey for core from 172.24.4.1 port 38732 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:38:03.526974 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:38:03.538447 systemd-logind[1458]: New session 26 of user core.
Mar 25 02:38:03.546602 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 25 02:38:04.264917 sshd[5368]: Connection closed by 172.24.4.1 port 38732
Mar 25 02:38:04.265739 sshd-session[5366]: pam_unix(sshd:session): session closed for user core
Mar 25 02:38:04.276912 systemd[1]: sshd@23-172.24.4.54:22-172.24.4.1:38732.service: Deactivated successfully.
Mar 25 02:38:04.280931 systemd[1]: session-26.scope: Deactivated successfully.
Mar 25 02:38:04.283223 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit.
Mar 25 02:38:04.286071 systemd-logind[1458]: Removed session 26.
Mar 25 02:38:09.286823 systemd[1]: Started sshd@24-172.24.4.54:22-172.24.4.1:50684.service - OpenSSH per-connection server daemon (172.24.4.1:50684).
Mar 25 02:38:10.167567 containerd[1474]: time="2025-03-25T02:38:10.167522291Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"a020727ea74ad063a9b7a256b9428e7eaa0f694494a02e53313613eb3e2f1ef6\" pid:5397 exited_at:{seconds:1742870290 nanos:166934859}"
Mar 25 02:38:10.619867 sshd[5382]: Accepted publickey for core from 172.24.4.1 port 50684 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:38:10.623249 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:38:10.635762 systemd-logind[1458]: New session 27 of user core.
Mar 25 02:38:10.644583 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 25 02:38:11.412086 sshd[5409]: Connection closed by 172.24.4.1 port 50684
Mar 25 02:38:11.411121 sshd-session[5382]: pam_unix(sshd:session): session closed for user core
Mar 25 02:38:11.418649 systemd-logind[1458]: Session 27 logged out. Waiting for processes to exit.
Mar 25 02:38:11.418973 systemd[1]: sshd@24-172.24.4.54:22-172.24.4.1:50684.service: Deactivated successfully.
Mar 25 02:38:11.423175 systemd[1]: session-27.scope: Deactivated successfully.
Mar 25 02:38:11.428462 systemd-logind[1458]: Removed session 27.
Mar 25 02:38:16.434510 systemd[1]: Started sshd@25-172.24.4.54:22-172.24.4.1:58888.service - OpenSSH per-connection server daemon (172.24.4.1:58888).
Mar 25 02:38:17.915098 sshd[5423]: Accepted publickey for core from 172.24.4.1 port 58888 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:38:17.918913 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:38:17.932453 systemd-logind[1458]: New session 28 of user core.
Mar 25 02:38:17.938615 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 25 02:38:18.581355 sshd[5425]: Connection closed by 172.24.4.1 port 58888
Mar 25 02:38:18.582496 sshd-session[5423]: pam_unix(sshd:session): session closed for user core
Mar 25 02:38:18.590704 systemd[1]: sshd@25-172.24.4.54:22-172.24.4.1:58888.service: Deactivated successfully.
Mar 25 02:38:18.596668 systemd[1]: session-28.scope: Deactivated successfully.
Mar 25 02:38:18.601409 systemd-logind[1458]: Session 28 logged out. Waiting for processes to exit.
Mar 25 02:38:18.603937 systemd-logind[1458]: Removed session 28.
Mar 25 02:38:23.605092 systemd[1]: Started sshd@26-172.24.4.54:22-172.24.4.1:57714.service - OpenSSH per-connection server daemon (172.24.4.1:57714).
Mar 25 02:38:24.856478 sshd[5437]: Accepted publickey for core from 172.24.4.1 port 57714 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:38:24.859129 sshd-session[5437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:38:24.870760 systemd-logind[1458]: New session 29 of user core.
Mar 25 02:38:24.879601 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 25 02:38:25.556697 sshd[5439]: Connection closed by 172.24.4.1 port 57714
Mar 25 02:38:25.558190 sshd-session[5437]: pam_unix(sshd:session): session closed for user core
Mar 25 02:38:25.565708 systemd[1]: sshd@26-172.24.4.54:22-172.24.4.1:57714.service: Deactivated successfully.
Mar 25 02:38:25.570123 systemd[1]: session-29.scope: Deactivated successfully.
Mar 25 02:38:25.572591 systemd-logind[1458]: Session 29 logged out. Waiting for processes to exit.
Mar 25 02:38:25.575239 systemd-logind[1458]: Removed session 29.
Mar 25 02:38:26.280326 containerd[1474]: time="2025-03-25T02:38:26.280059878Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"d62b3ad6894f3f24839b847a600a294b5716eff2f771c86986fdfdb39ce39a92\" pid:5462 exited_at:{seconds:1742870306 nanos:279504083}"
Mar 25 02:38:30.737659 containerd[1474]: time="2025-03-25T02:38:30.737452913Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f772cb72260b266b7dc70cdbdc27a90e10a9354f6db9703e8aa5a21684453d43\" id:\"b17fbb8f9dc71941baabb7ff590ed961b78f6b7fa70e366763b12a62de953308\" pid:5483 exited_at:{seconds:1742870310 nanos:737097581}"
Mar 25 02:38:40.169127 containerd[1474]: time="2025-03-25T02:38:40.169024060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40a5084b1f82b7fc41162546c95425570b388d3a541529666232da4317974cb0\" id:\"546d88432eb208e6e0dabade25d6b4eb290285b8af017397457921f54701ad06\" pid:5508 exited_at:{seconds:1742870320 nanos:168801433}"