Dec 13 01:39:41.058752 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:39:41.058790 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:39:41.058803 kernel: BIOS-provided physical RAM map:
Dec 13 01:39:41.058812 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:39:41.058821 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:39:41.058830 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:39:41.058841 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Dec 13 01:39:41.058850 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Dec 13 01:39:41.058863 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:39:41.058873 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:39:41.058882 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:39:41.058891 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:39:41.058900 kernel: NX (Execute Disable) protection: active
Dec 13 01:39:41.058927 kernel: APIC: Static calls initialized
Dec 13 01:39:41.058943 kernel: SMBIOS 2.8 present.
Dec 13 01:39:41.058954 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Dec 13 01:39:41.058964 kernel: Hypervisor detected: KVM
Dec 13 01:39:41.058974 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:39:41.058984 kernel: kvm-clock: using sched offset of 3015484482 cycles
Dec 13 01:39:41.058994 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:39:41.059005 kernel: tsc: Detected 2495.310 MHz processor
Dec 13 01:39:41.059015 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:39:41.059026 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:39:41.059041 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Dec 13 01:39:41.059051 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:39:41.059061 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:39:41.059072 kernel: Using GB pages for direct mapping
Dec 13 01:39:41.059082 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:39:41.059092 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Dec 13 01:39:41.059102 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:39:41.059113 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:39:41.059123 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:39:41.059137 kernel: ACPI: FACS 0x000000007CFE0000 000040
Dec 13 01:39:41.059147 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:39:41.059157 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:39:41.059168 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:39:41.059178 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:39:41.059188 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Dec 13 01:39:41.059198 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Dec 13 01:39:41.059209 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Dec 13 01:39:41.059228 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Dec 13 01:39:41.059251 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Dec 13 01:39:41.059262 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Dec 13 01:39:41.059272 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Dec 13 01:39:41.059283 kernel: No NUMA configuration found
Dec 13 01:39:41.059294 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Dec 13 01:39:41.059308 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Dec 13 01:39:41.059319 kernel: Zone ranges:
Dec 13 01:39:41.059329 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:39:41.059340 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Dec 13 01:39:41.059350 kernel: Normal empty
Dec 13 01:39:41.059361 kernel: Movable zone start for each node
Dec 13 01:39:41.059372 kernel: Early memory node ranges
Dec 13 01:39:41.059382 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:39:41.059393 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Dec 13 01:39:41.059403 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Dec 13 01:39:41.059418 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:39:41.059428 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:39:41.059439 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:39:41.059449 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:39:41.059460 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:39:41.059471 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:39:41.059481 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:39:41.059492 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:39:41.059502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:39:41.059517 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:39:41.059528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:39:41.059538 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:39:41.059549 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:39:41.059560 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:39:41.059570 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:39:41.059581 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:39:41.059591 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:39:41.059602 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:39:41.059616 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:39:41.059627 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:39:41.059638 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:39:41.059648 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:39:41.059658 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 01:39:41.059671 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:39:41.059683 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:39:41.059693 kernel: random: crng init done
Dec 13 01:39:41.059707 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:39:41.059718 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:39:41.059729 kernel: Fallback order for Node 0: 0
Dec 13 01:39:41.059739 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Dec 13 01:39:41.059750 kernel: Policy zone: DMA32
Dec 13 01:39:41.059760 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:39:41.059772 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved)
Dec 13 01:39:41.059783 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:39:41.059793 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:39:41.059807 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:39:41.059818 kernel: Dynamic Preempt: voluntary
Dec 13 01:39:41.059828 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:39:41.059840 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:39:41.059851 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:39:41.059862 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:39:41.059873 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:39:41.059884 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:39:41.059894 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:39:41.059909 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:39:41.059934 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 01:39:41.059945 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:39:41.059955 kernel: Console: colour VGA+ 80x25
Dec 13 01:39:41.059966 kernel: printk: console [tty0] enabled
Dec 13 01:39:41.059976 kernel: printk: console [ttyS0] enabled
Dec 13 01:39:41.059987 kernel: ACPI: Core revision 20230628
Dec 13 01:39:41.059998 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:39:41.060009 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:39:41.060019 kernel: x2apic enabled
Dec 13 01:39:41.060034 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:39:41.060045 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:39:41.060055 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:39:41.060066 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
Dec 13 01:39:41.060077 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:39:41.060087 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:39:41.060098 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:39:41.060109 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:39:41.060135 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:39:41.060146 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:39:41.060157 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:39:41.060172 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:39:41.060183 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:39:41.060195 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:39:41.060206 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:39:41.060218 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:39:41.060230 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:39:41.060252 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:39:41.060264 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:39:41.060279 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:39:41.060291 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:39:41.060302 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:39:41.060313 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:39:41.060324 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:39:41.060339 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:39:41.060350 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:39:41.060361 kernel: landlock: Up and running.
Dec 13 01:39:41.060372 kernel: SELinux: Initializing.
Dec 13 01:39:41.060383 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:39:41.060395 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 01:39:41.060406 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:39:41.060417 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:39:41.060429 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:39:41.060444 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:39:41.060455 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:39:41.060466 kernel: ... version: 0
Dec 13 01:39:41.060477 kernel: ... bit width: 48
Dec 13 01:39:41.060488 kernel: ... generic registers: 6
Dec 13 01:39:41.060499 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:39:41.060510 kernel: ... max period: 00007fffffffffff
Dec 13 01:39:41.060521 kernel: ... fixed-purpose events: 0
Dec 13 01:39:41.060532 kernel: ... event mask: 000000000000003f
Dec 13 01:39:41.060547 kernel: signal: max sigframe size: 1776
Dec 13 01:39:41.060558 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:39:41.060569 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:39:41.060581 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:39:41.060592 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:39:41.060603 kernel: .... node #0, CPUs: #1
Dec 13 01:39:41.060614 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:39:41.060625 kernel: smpboot: Max logical packages: 1
Dec 13 01:39:41.060636 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Dec 13 01:39:41.060651 kernel: devtmpfs: initialized
Dec 13 01:39:41.060662 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:39:41.060673 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:39:41.060684 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:39:41.060695 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:39:41.060706 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:39:41.060717 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:39:41.060729 kernel: audit: type=2000 audit(1734053979.651:1): state=initialized audit_enabled=0 res=1
Dec 13 01:39:41.060740 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:39:41.060755 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:39:41.060766 kernel: cpuidle: using governor menu
Dec 13 01:39:41.060777 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:39:41.060788 kernel: dca service started, version 1.12.1
Dec 13 01:39:41.060799 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:39:41.060810 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:39:41.060822 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:39:41.060833 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:39:41.060844 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:39:41.060858 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:39:41.060870 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:39:41.060881 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:39:41.060892 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:39:41.060903 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:39:41.060936 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:39:41.060948 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:39:41.060959 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:39:41.060970 kernel: ACPI: Interpreter enabled
Dec 13 01:39:41.060985 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:39:41.060996 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:39:41.061007 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:39:41.061019 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:39:41.061030 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:39:41.061041 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:39:41.061317 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:39:41.061511 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:39:41.061698 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:39:41.061713 kernel: PCI host bridge to bus 0000:00
Dec 13 01:39:41.061902 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:39:41.062106 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:39:41.062285 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:39:41.062449 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Dec 13 01:39:41.062612 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:39:41.062781 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:39:41.062973 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:39:41.063176 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:39:41.063388 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Dec 13 01:39:41.063570 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Dec 13 01:39:41.063749 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Dec 13 01:39:41.065986 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Dec 13 01:39:41.066193 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Dec 13 01:39:41.066391 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:39:41.066583 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:39:41.066763 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Dec 13 01:39:41.066999 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 01:39:41.067180 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Dec 13 01:39:41.067392 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 01:39:41.067571 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Dec 13 01:39:41.069259 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 01:39:41.069463 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Dec 13 01:39:41.069662 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 01:39:41.069842 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Dec 13 01:39:41.070059 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 01:39:41.072260 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Dec 13 01:39:41.072476 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 01:39:41.072660 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Dec 13 01:39:41.072850 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 01:39:41.074084 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Dec 13 01:39:41.074303 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 01:39:41.074486 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Dec 13 01:39:41.074677 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:39:41.074855 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:39:41.075084 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:39:41.075275 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Dec 13 01:39:41.075465 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Dec 13 01:39:41.075650 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:39:41.075829 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:39:41.078128 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:39:41.078343 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Dec 13 01:39:41.078532 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 01:39:41.078718 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Dec 13 01:39:41.078908 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:39:41.079122 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:39:41.079329 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:39:41.079528 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 01:39:41.079714 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Dec 13 01:39:41.079897 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:39:41.082127 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:39:41.082328 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:39:41.082531 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 01:39:41.082721 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Dec 13 01:39:41.084933 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Dec 13 01:39:41.085138 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:39:41.085336 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:39:41.085514 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:39:41.085721 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 01:39:41.085964 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 01:39:41.086153 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:39:41.086346 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:39:41.086523 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:39:41.086720 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 01:39:41.086907 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Dec 13 01:39:41.087117 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:39:41.087312 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:39:41.087493 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:39:41.087727 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 01:39:41.089951 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Dec 13 01:39:41.090091 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Dec 13 01:39:41.090212 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:39:41.090348 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:39:41.090466 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:39:41.090476 kernel: acpiphp: Slot [0] registered
Dec 13 01:39:41.090609 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 01:39:41.090733 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Dec 13 01:39:41.090854 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Dec 13 01:39:41.090993 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Dec 13 01:39:41.091115 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:39:41.091247 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:39:41.091368 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:39:41.091378 kernel: acpiphp: Slot [0-2] registered
Dec 13 01:39:41.091496 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:39:41.091614 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:39:41.091732 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:39:41.091742 kernel: acpiphp: Slot [0-3] registered
Dec 13 01:39:41.091862 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:39:41.094616 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:39:41.094739 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:39:41.094749 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:39:41.094757 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:39:41.094765 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:39:41.094773 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:39:41.094780 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:39:41.094788 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:39:41.094796 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:39:41.094807 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:39:41.094815 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:39:41.094823 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:39:41.094831 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:39:41.094838 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:39:41.094846 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:39:41.094854 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:39:41.094861 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:39:41.094869 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:39:41.094879 kernel: iommu: Default domain type: Translated
Dec 13 01:39:41.094887 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:39:41.094894 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:39:41.094902 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:39:41.094922 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:39:41.094930 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Dec 13 01:39:41.095053 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:39:41.095173 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:39:41.095321 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:39:41.095332 kernel: vgaarb: loaded
Dec 13 01:39:41.095340 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:39:41.095348 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:39:41.095355 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:39:41.095363 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:39:41.095371 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:39:41.095379 kernel: pnp: PnP ACPI init
Dec 13 01:39:41.095505 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:39:41.095520 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 01:39:41.095528 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:39:41.095535 kernel: NET: Registered PF_INET protocol family
Dec 13 01:39:41.095543 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:39:41.095551 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 01:39:41.095559 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:39:41.095567 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:39:41.095574 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 01:39:41.095584 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 01:39:41.095592 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:39:41.095600 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 01:39:41.095607 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:39:41.095615 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:39:41.095732 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 01:39:41.095850 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 01:39:41.097384 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 01:39:41.097513 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 01:39:41.097631 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 01:39:41.097747 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 01:39:41.097865 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 01:39:41.098029 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 01:39:41.098146 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:39:41.098277 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 01:39:41.098395 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 01:39:41.098517 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:39:41.098635 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 01:39:41.098751 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 01:39:41.098866 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:39:41.098999 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 01:39:41.099117 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 01:39:41.099233 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:39:41.099369 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 01:39:41.099508 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 01:39:41.099626 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:39:41.099745 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 01:39:41.099862 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 01:39:41.102883 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:39:41.103025 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 01:39:41.103144 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Dec 13 01:39:41.103270 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 01:39:41.103387 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:39:41.103512 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 01:39:41.103629 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Dec 13 01:39:41.103758 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Dec 13 01:39:41.103900 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:39:41.104108 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 01:39:41.104229 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Dec 13 01:39:41.104363 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 01:39:41.104486 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:39:41.104603 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:39:41.104711 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:39:41.104823 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:39:41.106037 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Dec 13 01:39:41.106154 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:39:41.106274 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:39:41.106406 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 01:39:41.106522 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Dec 13 01:39:41.106646 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 01:39:41.106766 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 01:39:41.106931 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 01:39:41.108155 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 01:39:41.108346 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 01:39:41.108522 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 01:39:41.108706 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 01:39:41.108886 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 01:39:41.109989 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 01:39:41.110152 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 01:39:41.110345 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Dec 13 01:39:41.110506 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 01:39:41.110668 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 01:39:41.110848 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Dec 13 01:39:41.112094 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Dec 13 01:39:41.112218 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 01:39:41.112382 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Dec 13 01:39:41.112523 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 01:39:41.112639 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 01:39:41.112651 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:39:41.112659 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:39:41.112672 kernel: Initialise system trusted keyrings
Dec 13 01:39:41.112681 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 01:39:41.112689 kernel: Key type asymmetric registered
Dec 13 01:39:41.112697 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:39:41.112704 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:39:41.112712 kernel: io scheduler mq-deadline registered
Dec 13 01:39:41.112720
kernel: io scheduler kyber registered Dec 13 01:39:41.112728 kernel: io scheduler bfq registered Dec 13 01:39:41.112850 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 01:39:41.114024 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 01:39:41.114156 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 01:39:41.114308 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 01:39:41.114452 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 01:39:41.114571 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 01:39:41.114691 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 01:39:41.114813 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 01:39:41.114956 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 01:39:41.115079 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 01:39:41.115200 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 01:39:41.115334 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 01:39:41.115455 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 01:39:41.115573 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 01:39:41.115692 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 01:39:41.115809 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 01:39:41.115820 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 01:39:41.118017 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 13 01:39:41.118151 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 13 01:39:41.118162 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:39:41.118170 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 13 01:39:41.118179 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:39:41.118187 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:39:41.118195 kernel: i8042: PNP: PS/2 Controller 
[PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:39:41.118203 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:39:41.118211 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:39:41.118348 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 01:39:41.118361 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:39:41.118468 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 01:39:41.118582 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T01:39:40 UTC (1734053980) Dec 13 01:39:41.118690 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:39:41.118700 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 13 01:39:41.118709 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:39:41.118716 kernel: Segment Routing with IPv6 Dec 13 01:39:41.118728 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:39:41.118736 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:39:41.118744 kernel: Key type dns_resolver registered Dec 13 01:39:41.118751 kernel: IPI shorthand broadcast: enabled Dec 13 01:39:41.118759 kernel: sched_clock: Marking stable (1355006344, 152140182)->(1518509606, -11363080) Dec 13 01:39:41.118767 kernel: registered taskstats version 1 Dec 13 01:39:41.118775 kernel: Loading compiled-in X.509 certificates Dec 13 01:39:41.118783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:39:41.118791 kernel: Key type .fscrypt registered Dec 13 01:39:41.118801 kernel: Key type fscrypt-provisioning registered Dec 13 01:39:41.118809 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:39:41.118817 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:39:41.118825 kernel: ima: No architecture policies found Dec 13 01:39:41.118832 kernel: clk: Disabling unused clocks Dec 13 01:39:41.118840 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:39:41.118848 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:39:41.118856 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:39:41.118864 kernel: Run /init as init process Dec 13 01:39:41.118874 kernel: with arguments: Dec 13 01:39:41.118882 kernel: /init Dec 13 01:39:41.118890 kernel: with environment: Dec 13 01:39:41.118897 kernel: HOME=/ Dec 13 01:39:41.118905 kernel: TERM=linux Dec 13 01:39:41.118931 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:39:41.118941 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:39:41.118952 systemd[1]: Detected virtualization kvm. Dec 13 01:39:41.118964 systemd[1]: Detected architecture x86-64. Dec 13 01:39:41.118972 systemd[1]: Running in initrd. Dec 13 01:39:41.118980 systemd[1]: No hostname configured, using default hostname. Dec 13 01:39:41.118988 systemd[1]: Hostname set to . Dec 13 01:39:41.118997 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:39:41.119005 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:39:41.119014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:39:41.119022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 01:39:41.119033 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:39:41.119042 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:39:41.119050 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:39:41.119059 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:39:41.119069 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:39:41.119078 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:39:41.119090 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:39:41.119102 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:39:41.119113 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:39:41.119130 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:39:41.119144 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:39:41.119154 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:39:41.119162 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:39:41.119170 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:39:41.119179 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:39:41.119192 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:39:41.119200 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:39:41.119209 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:39:41.119217 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 01:39:41.119226 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:39:41.119234 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:39:41.119256 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:39:41.119265 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:39:41.119273 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:39:41.119284 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:39:41.119292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:39:41.119327 systemd-journald[188]: Collecting audit messages is disabled. Dec 13 01:39:41.119348 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:39:41.119359 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:39:41.119368 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:39:41.119377 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:39:41.119386 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:39:41.119397 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:39:41.119405 systemd-journald[188]: Journal started Dec 13 01:39:41.119425 systemd-journald[188]: Runtime Journal (/run/log/journal/f2e39f331aa14cb5b54a9bee00096dc9) is 4.8M, max 38.4M, 33.6M free. Dec 13 01:39:41.105437 systemd-modules-load[189]: Inserted module 'overlay' Dec 13 01:39:41.140102 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:39:41.138356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:39:41.149925 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 01:39:41.149953 kernel: Bridge firewalling registered Dec 13 01:39:41.147382 systemd-modules-load[189]: Inserted module 'br_netfilter' Dec 13 01:39:41.149218 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:39:41.153115 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:39:41.154317 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:39:41.157004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:39:41.170122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:39:41.171309 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:39:41.175055 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:39:41.186187 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:39:41.188003 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:39:41.192563 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:39:41.200945 dracut-cmdline[216]: dracut-dracut-053 Dec 13 01:39:41.204001 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:39:41.202088 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:39:41.236209 systemd-resolved[225]: Positive Trust Anchors: Dec 13 01:39:41.236892 systemd-resolved[225]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:39:41.236936 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:39:41.243343 systemd-resolved[225]: Defaulting to hostname 'linux'. Dec 13 01:39:41.244679 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:39:41.245213 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:39:41.282002 kernel: SCSI subsystem initialized Dec 13 01:39:41.291943 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:39:41.302944 kernel: iscsi: registered transport (tcp) Dec 13 01:39:41.325059 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:39:41.325131 kernel: QLogic iSCSI HBA Driver Dec 13 01:39:41.391223 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:39:41.398160 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:39:41.445764 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 01:39:41.445894 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:39:41.445958 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:39:41.513999 kernel: raid6: avx2x4 gen() 19207 MB/s Dec 13 01:39:41.531023 kernel: raid6: avx2x2 gen() 22018 MB/s Dec 13 01:39:41.548279 kernel: raid6: avx2x1 gen() 22558 MB/s Dec 13 01:39:41.548401 kernel: raid6: using algorithm avx2x1 gen() 22558 MB/s Dec 13 01:39:41.568076 kernel: raid6: .... xor() 15041 MB/s, rmw enabled Dec 13 01:39:41.568134 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:39:41.590016 kernel: xor: automatically using best checksumming function avx Dec 13 01:39:41.761981 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:39:41.781933 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:39:41.788296 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:39:41.803547 systemd-udevd[405]: Using default interface naming scheme 'v255'. Dec 13 01:39:41.808409 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:39:41.818229 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:39:41.840823 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Dec 13 01:39:41.890666 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:39:41.897177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:39:41.991962 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:39:42.004684 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:39:42.041079 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:39:42.044956 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 13 01:39:42.045704 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:39:42.047160 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:39:42.057121 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:39:42.071698 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:39:42.096184 kernel: ACPI: bus type USB registered Dec 13 01:39:42.096246 kernel: usbcore: registered new interface driver usbfs Dec 13 01:39:42.098106 kernel: usbcore: registered new interface driver hub Dec 13 01:39:42.098130 kernel: usbcore: registered new device driver usb Dec 13 01:39:42.123409 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 01:39:42.152568 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 13 01:39:42.152773 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 01:39:42.153012 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 01:39:42.153207 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 13 01:39:42.153414 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 13 01:39:42.153607 kernel: hub 1-0:1.0: USB hub found Dec 13 01:39:42.153840 kernel: hub 1-0:1.0: 4 ports detected Dec 13 01:39:42.155202 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:39:42.155215 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 01:39:42.155410 kernel: scsi host0: Virtio SCSI HBA Dec 13 01:39:42.155571 kernel: hub 2-0:1.0: USB hub found Dec 13 01:39:42.155725 kernel: hub 2-0:1.0: 4 ports detected Dec 13 01:39:42.155351 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 01:39:42.162320 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 13 01:39:42.155478 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:39:42.157306 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:39:42.157787 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:39:42.157904 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:39:42.160711 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:39:42.204363 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:39:42.186665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:39:42.207384 kernel: AES CTR mode by8 optimization enabled Dec 13 01:39:42.211948 kernel: libata version 3.00 loaded. Dec 13 01:39:42.274101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Dec 13 01:39:42.278000 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:39:42.307856 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:39:42.307881 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:39:42.308059 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:39:42.308195 kernel: scsi host1: ahci Dec 13 01:39:42.308363 kernel: scsi host2: ahci Dec 13 01:39:42.308504 kernel: scsi host3: ahci Dec 13 01:39:42.308643 kernel: scsi host4: ahci Dec 13 01:39:42.308794 kernel: sd 0:0:0:0: Power-on or device reset occurred Dec 13 01:39:42.318202 kernel: scsi host5: ahci Dec 13 01:39:42.318424 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 13 01:39:42.318610 kernel: scsi host6: ahci Dec 13 01:39:42.318759 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:39:42.318940 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 49 Dec 13 01:39:42.318951 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Dec 13 01:39:42.319702 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 49 Dec 13 01:39:42.319714 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 01:39:42.319876 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 49 Dec 13 01:39:42.319888 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:39:42.319898 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 49 Dec 13 01:39:42.319908 kernel: GPT:17805311 != 80003071 Dec 13 01:39:42.319987 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 49 Dec 13 01:39:42.319997 kernel: GPT:Alternate GPT header not at the end of the disk. 
Dec 13 01:39:42.320012 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 49 Dec 13 01:39:42.320022 kernel: GPT:17805311 != 80003071 Dec 13 01:39:42.320032 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:39:42.320042 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:39:42.320052 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:39:42.285070 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:39:42.317318 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:39:42.379955 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 01:39:42.529980 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:39:42.633466 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:39:42.633588 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:39:42.633612 kernel: ata1.00: applying bridge limits Dec 13 01:39:42.633634 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:39:42.633959 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:39:42.646187 kernel: ata1.00: configured for UDMA/100 Dec 13 01:39:42.646939 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:39:42.650000 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:39:42.657018 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 01:39:42.657073 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:39:42.697813 kernel: usbcore: registered new interface driver usbhid Dec 13 01:39:42.697890 kernel: usbhid: USB HID core driver Dec 13 01:39:42.720052 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 01:39:42.720131 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 
01:39:42.734960 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:39:42.751843 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:39:42.751992 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:39:42.761556 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (457) Dec 13 01:39:42.769953 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (459) Dec 13 01:39:42.790642 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 13 01:39:42.797835 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 13 01:39:42.807510 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 01:39:42.812437 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 01:39:42.812989 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 13 01:39:42.819227 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:39:42.825955 disk-uuid[576]: Primary Header is updated. Dec 13 01:39:42.825955 disk-uuid[576]: Secondary Entries is updated. Dec 13 01:39:42.825955 disk-uuid[576]: Secondary Header is updated. Dec 13 01:39:42.835950 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:39:42.848960 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:39:42.857948 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:39:43.863003 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:39:43.863272 disk-uuid[578]: The operation has completed successfully. Dec 13 01:39:43.958443 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:39:43.958584 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Dec 13 01:39:43.988191 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:39:43.995055 sh[597]: Success Dec 13 01:39:44.011102 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:39:44.091135 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:39:44.102842 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:39:44.103646 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:39:44.137864 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:39:44.137935 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:39:44.137947 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:39:44.141683 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:39:44.141758 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:39:44.152961 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:39:44.154741 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:39:44.155901 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:39:44.161095 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:39:44.165065 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 13 01:39:44.187333 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:39:44.187403 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:39:44.187431 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:39:44.194585 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:39:44.194645 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:39:44.207633 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:39:44.210650 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:39:44.216324 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:39:44.222128 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:39:44.283674 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:39:44.299170 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:39:44.314854 ignition[707]: Ignition 2.19.0 Dec 13 01:39:44.314865 ignition[707]: Stage: fetch-offline Dec 13 01:39:44.314899 ignition[707]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:39:44.314908 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 01:39:44.318325 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 13 01:39:44.315020 ignition[707]: parsed url from cmdline: "" Dec 13 01:39:44.315023 ignition[707]: no config URL provided Dec 13 01:39:44.315028 ignition[707]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:39:44.315037 ignition[707]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:39:44.315042 ignition[707]: failed to fetch config: resource requires networking Dec 13 01:39:44.315205 ignition[707]: Ignition finished successfully Dec 13 01:39:44.327605 systemd-networkd[778]: lo: Link UP Dec 13 01:39:44.327615 systemd-networkd[778]: lo: Gained carrier Dec 13 01:39:44.330338 systemd-networkd[778]: Enumeration completed Dec 13 01:39:44.330644 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:39:44.331031 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:39:44.331035 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:39:44.331983 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:39:44.331987 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:39:44.332259 systemd[1]: Reached target network.target - Network. Dec 13 01:39:44.333393 systemd-networkd[778]: eth0: Link UP Dec 13 01:39:44.333397 systemd-networkd[778]: eth0: Gained carrier Dec 13 01:39:44.333404 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:39:44.338460 systemd-networkd[778]: eth1: Link UP Dec 13 01:39:44.338463 systemd-networkd[778]: eth1: Gained carrier Dec 13 01:39:44.338470 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:39:44.340123 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:39:44.353037 ignition[785]: Ignition 2.19.0
Dec 13 01:39:44.353047 ignition[785]: Stage: fetch
Dec 13 01:39:44.353198 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:39:44.353209 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:39:44.353308 ignition[785]: parsed url from cmdline: ""
Dec 13 01:39:44.353312 ignition[785]: no config URL provided
Dec 13 01:39:44.353316 ignition[785]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:39:44.353325 ignition[785]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:39:44.353341 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 13 01:39:44.353494 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 01:39:44.384987 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:39:44.396977 systemd-networkd[778]: eth0: DHCPv4 address 49.13.58.183/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 01:39:44.553762 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 13 01:39:44.559765 ignition[785]: GET result: OK
Dec 13 01:39:44.559846 ignition[785]: parsing config with SHA512: c4006aa693dc5c46569ba2eef14097eab9033f2621d7f479f3bbf25a0f57ba37e157d195b12af93a7fbe946785a3e6ed65d6569137e72d3aa06857eb516b48cd
Dec 13 01:39:44.565093 unknown[785]: fetched base config from "system"
Dec 13 01:39:44.565114 unknown[785]: fetched base config from "system"
Dec 13 01:39:44.565743 ignition[785]: fetch: fetch complete
Dec 13 01:39:44.565131 unknown[785]: fetched user config from "hetzner"
Dec 13 01:39:44.565753 ignition[785]: fetch: fetch passed
Dec 13 01:39:44.565825 ignition[785]: Ignition finished successfully
Dec 13 01:39:44.573040 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:39:44.580226 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:39:44.620880 ignition[792]: Ignition 2.19.0
Dec 13 01:39:44.622004 ignition[792]: Stage: kargs
Dec 13 01:39:44.622351 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:39:44.622374 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:39:44.623964 ignition[792]: kargs: kargs passed
Dec 13 01:39:44.627438 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:39:44.624047 ignition[792]: Ignition finished successfully
Dec 13 01:39:44.645202 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:39:44.675618 ignition[798]: Ignition 2.19.0
Dec 13 01:39:44.675641 ignition[798]: Stage: disks
Dec 13 01:39:44.677036 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:39:44.677060 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:39:44.678774 ignition[798]: disks: disks passed
Dec 13 01:39:44.678858 ignition[798]: Ignition finished successfully
Dec 13 01:39:44.682265 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:39:44.684405 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:39:44.686627 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:39:44.689165 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:39:44.691543 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:39:44.693700 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:39:44.702224 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:39:44.742439 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 01:39:44.748051 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:39:44.757104 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:39:44.890948 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:39:44.891981 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:39:44.892984 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:39:44.898173 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:39:44.903112 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:39:44.912964 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (814)
Dec 13 01:39:44.913276 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:39:44.929039 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:39:44.929063 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:39:44.929074 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:39:44.929085 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:39:44.929095 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:39:44.914599 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:39:44.914656 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:39:44.935451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:39:44.937109 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:39:44.948096 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:39:44.997116 coreos-metadata[816]: Dec 13 01:39:44.997 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 13 01:39:44.998688 coreos-metadata[816]: Dec 13 01:39:44.998 INFO Fetch successful
Dec 13 01:39:45.000164 coreos-metadata[816]: Dec 13 01:39:45.000 INFO wrote hostname ci-4081-2-1-6-fe8de2e796 to /sysroot/etc/hostname
Dec 13 01:39:45.003381 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:39:45.013386 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:39:45.020681 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:39:45.026627 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:39:45.032948 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:39:45.183980 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:39:45.190031 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:39:45.193880 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:39:45.199562 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:39:45.200454 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:39:45.232899 ignition[930]: INFO : Ignition 2.19.0
Dec 13 01:39:45.234100 ignition[930]: INFO : Stage: mount
Dec 13 01:39:45.235301 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:39:45.235301 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:39:45.235301 ignition[930]: INFO : mount: mount passed
Dec 13 01:39:45.238670 ignition[930]: INFO : Ignition finished successfully
Dec 13 01:39:45.238546 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:39:45.239423 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:39:45.246020 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:39:45.254686 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:39:45.281979 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943)
Dec 13 01:39:45.286388 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:39:45.286417 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:39:45.290192 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:39:45.296220 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 01:39:45.296260 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:39:45.301318 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:39:45.327270 ignition[960]: INFO : Ignition 2.19.0
Dec 13 01:39:45.327270 ignition[960]: INFO : Stage: files
Dec 13 01:39:45.329835 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:39:45.329835 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:39:45.329835 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:39:45.333415 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:39:45.333415 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:39:45.336265 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:39:45.337529 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:39:45.337529 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:39:45.337188 unknown[960]: wrote ssh authorized keys file for user: core
Dec 13 01:39:45.341062 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:39:45.341062 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:39:45.341062 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:39:45.345361 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:39:45.345361 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:39:45.345361 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:39:45.345361 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:39:45.345361 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:39:45.345361 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:39:45.345361 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:39:45.639460 systemd-networkd[778]: eth1: Gained IPv6LL
Dec 13 01:39:46.023322 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Dec 13 01:39:46.280271 systemd-networkd[778]: eth0: Gained IPv6LL
Dec 13 01:39:47.104770 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:39:47.104770 ignition[960]: INFO : files: op(8): [started] processing unit "containerd.service"
Dec 13 01:39:47.109694 ignition[960]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:39:47.109694 ignition[960]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:39:47.109694 ignition[960]: INFO : files: op(8): [finished] processing unit "containerd.service"
Dec 13 01:39:47.109694 ignition[960]: INFO : files: op(a): [started] processing unit "coreos-metadata.service"
Dec 13 01:39:47.109694 ignition[960]: INFO : files: op(a): op(b): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 01:39:47.109694 ignition[960]: INFO : files: op(a): op(b): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 01:39:47.109694 ignition[960]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service"
Dec 13 01:39:47.109694 ignition[960]: INFO : files: createResultFile: createFiles: op(c): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:39:47.109694 ignition[960]: INFO : files: createResultFile: createFiles: op(c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:39:47.109694 ignition[960]: INFO : files: files passed
Dec 13 01:39:47.109694 ignition[960]: INFO : Ignition finished successfully
Dec 13 01:39:47.114139 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:39:47.131325 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:39:47.136905 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:39:47.159798 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:39:47.160817 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:39:47.173654 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:39:47.173654 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:39:47.177540 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:39:47.181063 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:39:47.183448 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:39:47.190155 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:39:47.254053 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:39:47.254319 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:39:47.256821 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:39:47.258639 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:39:47.260874 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:39:47.267214 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:39:47.298802 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:39:47.307206 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:39:47.324441 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:39:47.327471 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:39:47.328953 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:39:47.331357 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:39:47.331612 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:39:47.334290 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:39:47.335874 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:39:47.338362 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:39:47.340858 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:39:47.343292 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:39:47.345362 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:39:47.347314 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:39:47.349372 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:39:47.351306 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:39:47.353262 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:39:47.355098 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:39:47.355352 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:39:47.358403 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:39:47.360029 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:39:47.362559 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:39:47.362774 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:39:47.365702 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:39:47.365963 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:39:47.369777 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:39:47.370058 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:39:47.371634 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:39:47.371968 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:39:47.373965 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:39:47.374199 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:39:47.384338 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:39:47.386350 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:39:47.386583 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:39:47.397374 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:39:47.399236 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:39:47.400295 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:39:47.403000 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:39:47.403269 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:39:47.418484 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:39:47.419460 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:39:47.428941 ignition[1012]: INFO : Ignition 2.19.0
Dec 13 01:39:47.428941 ignition[1012]: INFO : Stage: umount
Dec 13 01:39:47.428941 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:39:47.428941 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 01:39:47.437109 ignition[1012]: INFO : umount: umount passed
Dec 13 01:39:47.437109 ignition[1012]: INFO : Ignition finished successfully
Dec 13 01:39:47.431749 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:39:47.431980 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:39:47.435940 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:39:47.436057 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:39:47.440402 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:39:47.440511 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:39:47.441440 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:39:47.441518 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:39:47.444234 systemd[1]: Stopped target network.target - Network.
Dec 13 01:39:47.445050 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:39:47.445148 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:39:47.448051 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:39:47.455941 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:39:47.461997 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:39:47.464322 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:39:47.470348 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:39:47.470838 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:39:47.470894 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:39:47.472725 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:39:47.472780 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:39:47.475816 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:39:47.475905 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:39:47.478479 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:39:47.478580 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:39:47.479604 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:39:47.485767 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:39:47.487895 systemd-networkd[778]: eth1: DHCPv6 lease lost
Dec 13 01:39:47.494164 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:39:47.495613 systemd-networkd[778]: eth0: DHCPv6 lease lost
Dec 13 01:39:47.497237 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:39:47.497457 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:39:47.502892 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:39:47.503101 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:39:47.504072 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:39:47.504216 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:39:47.507330 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:39:47.507376 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:39:47.508518 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:39:47.508575 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:39:47.527271 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:39:47.527745 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:39:47.527803 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:39:47.528346 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:39:47.528391 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:39:47.528853 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:39:47.528897 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:39:47.529447 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:39:47.529493 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:39:47.530904 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:39:47.543384 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:39:47.543567 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:39:47.547568 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:39:47.548068 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:39:47.549954 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:39:47.550001 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:39:47.551113 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:39:47.551166 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:39:47.552408 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:39:47.552456 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:39:47.554122 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:39:47.554173 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:39:47.561442 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:39:47.561980 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:39:47.562039 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:39:47.565435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:39:47.565485 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:39:47.567383 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:39:47.567508 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:39:47.592500 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:39:47.592635 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:39:47.593577 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:39:47.602261 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:39:47.613457 systemd[1]: Switching root.
Dec 13 01:39:47.648877 systemd-journald[188]: Journal stopped
Dec 13 01:39:49.050876 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:39:49.051010 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:39:49.051028 kernel: SELinux: policy capability open_perms=1
Dec 13 01:39:49.051043 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:39:49.051053 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:39:49.051064 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:39:49.051082 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:39:49.051099 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:39:49.051110 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:39:49.051121 kernel: audit: type=1403 audit(1734053987.918:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:39:49.051140 systemd[1]: Successfully loaded SELinux policy in 70.380ms.
Dec 13 01:39:49.051162 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.284ms.
Dec 13 01:39:49.051175 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:39:49.051187 systemd[1]: Detected virtualization kvm.
Dec 13 01:39:49.051198 systemd[1]: Detected architecture x86-64.
Dec 13 01:39:49.051210 systemd[1]: Detected first boot.
Dec 13 01:39:49.051232 systemd[1]: Hostname set to .
Dec 13 01:39:49.051245 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:39:49.051256 zram_generator::config[1074]: No configuration found.
Dec 13 01:39:49.051271 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:39:49.051283 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:39:49.051298 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:39:49.051314 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:39:49.051334 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:39:49.051354 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:39:49.051369 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:39:49.051388 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:39:49.051402 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:39:49.051417 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:39:49.051429 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:39:49.051443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:39:49.051457 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:39:49.051470 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:39:49.051482 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:39:49.051493 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:39:49.051505 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:39:49.051519 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:39:49.051531 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:39:49.051545 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:39:49.051560 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:39:49.051581 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:39:49.051601 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:39:49.051616 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:39:49.051637 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:39:49.051654 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:39:49.051670 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:39:49.051686 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:39:49.051702 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:39:49.051720 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:39:49.051736 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:39:49.051751 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:39:49.051767 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:39:49.051785 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:39:49.051801 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:39:49.051816 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:39:49.051831 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:39:49.051847 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:39:49.051863 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:39:49.051878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:39:49.051894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:39:49.051928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:39:49.051950 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:39:49.051966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:39:49.051988 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:39:49.052004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:39:49.052020 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:39:49.052035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:39:49.052059 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:39:49.052079 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 01:39:49.052105 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 01:39:49.052123 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:39:49.052140 kernel: fuse: init (API version 7.39)
Dec 13 01:39:49.052157 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:39:49.052173 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:39:49.052188 kernel: ACPI: bus type drm_connector registered
Dec 13 01:39:49.052206 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:39:49.052234 kernel: loop: module loaded
Dec 13 01:39:49.052269 systemd-journald[1174]: Collecting audit messages is disabled.
Dec 13 01:39:49.052293 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:39:49.052305 systemd-journald[1174]: Journal started
Dec 13 01:39:49.052327 systemd-journald[1174]: Runtime Journal (/run/log/journal/f2e39f331aa14cb5b54a9bee00096dc9) is 4.8M, max 38.4M, 33.6M free.
Dec 13 01:39:49.055949 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:39:49.062956 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:39:49.063094 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:39:49.063699 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:39:49.064276 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:39:49.064800 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:39:49.065390 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:39:49.065971 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:39:49.066782 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:39:49.067660 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:39:49.068532 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:39:49.068739 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:39:49.069809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:39:49.070025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:39:49.070802 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:39:49.071109 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:39:49.071853 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:39:49.072056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:39:49.072883 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:39:49.073090 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:39:49.073889 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:39:49.078353 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:39:49.079875 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:39:49.081091 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:39:49.082465 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:39:49.102545 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:39:49.109048 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:39:49.116604 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:39:49.117308 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:39:49.132068 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:39:49.137074 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:39:49.140901 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:39:49.152213 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:39:49.153050 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:39:49.169043 systemd-journald[1174]: Time spent on flushing to /var/log/journal/f2e39f331aa14cb5b54a9bee00096dc9 is 38.451ms for 1102 entries.
Dec 13 01:39:49.169043 systemd-journald[1174]: System Journal (/var/log/journal/f2e39f331aa14cb5b54a9bee00096dc9) is 8.0M, max 584.8M, 576.8M free.
Dec 13 01:39:49.238335 systemd-journald[1174]: Received client request to flush runtime journal.
Dec 13 01:39:49.166112 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:39:49.176159 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:39:49.181768 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:39:49.186158 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:39:49.210721 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:39:49.211469 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:39:49.225466 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:39:49.249117 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:39:49.263354 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Dec 13 01:39:49.263374 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Dec 13 01:39:49.274456 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:39:49.279017 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:39:49.290164 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:39:49.296050 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:39:49.324174 udevadm[1233]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:39:49.343575 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:39:49.350248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:39:49.369170 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Dec 13 01:39:49.369556 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Dec 13 01:39:49.376103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:39:49.804637 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:39:49.819163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:39:49.845200 systemd-udevd[1243]: Using default interface naming scheme 'v255'.
Dec 13 01:39:49.870054 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:39:49.884271 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:39:49.923291 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:39:49.955475 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Dec 13 01:39:49.958967 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1254)
Dec 13 01:39:49.963128 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1254)
Dec 13 01:39:50.002154 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:39:50.086941 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1246)
Dec 13 01:39:50.104602 systemd-networkd[1251]: lo: Link UP
Dec 13 01:39:50.104614 systemd-networkd[1251]: lo: Gained carrier
Dec 13 01:39:50.107578 systemd-networkd[1251]: Enumeration completed
Dec 13 01:39:50.107717 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:39:50.109096 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:39:50.109101 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:39:50.109872 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:39:50.109877 systemd-networkd[1251]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:39:50.110523 systemd-networkd[1251]: eth0: Link UP
Dec 13 01:39:50.110528 systemd-networkd[1251]: eth0: Gained carrier
Dec 13 01:39:50.110542 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:39:50.116032 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Dec 13 01:39:50.114266 systemd-networkd[1251]: eth1: Link UP
Dec 13 01:39:50.114272 systemd-networkd[1251]: eth1: Gained carrier
Dec 13 01:39:50.114288 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:39:50.116715 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:39:50.121952 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 01:39:50.125934 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:39:50.148056 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Dec 13 01:39:50.148328 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:39:50.158992 systemd-networkd[1251]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:39:50.166119 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Dec 13 01:39:50.166963 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 13 01:39:50.168474 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:39:50.169447 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:39:50.169592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:39:50.176085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:39:50.177254 systemd-networkd[1251]: eth0: DHCPv4 address 49.13.58.183/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 01:39:50.180790 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:39:50.202429 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:39:50.202495 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 01:39:50.202512 kernel: [drm] features: -context_init
Dec 13 01:39:50.207039 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:39:50.208259 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:39:50.223159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:39:50.223356 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:39:50.223398 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:39:50.223440 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:39:50.223995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:39:50.224228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:39:50.229247 kernel: [drm] number of scanouts: 1
Dec 13 01:39:50.229360 kernel: [drm] number of cap sets: 0
Dec 13 01:39:50.233045 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Dec 13 01:39:50.237037 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 01:39:50.241002 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 01:39:50.240718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:39:50.240982 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:39:50.252392 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 01:39:50.255294 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:39:50.257128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:39:50.258254 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:39:50.258316 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:39:50.273906 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 01:39:50.278240 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 01:39:50.278290 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:39:50.285080 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:39:50.285270 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:39:50.299241 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:39:50.321704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:39:50.322039 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:39:50.328132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:39:50.413265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:39:50.459337 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:39:50.469316 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:39:50.487427 lvm[1311]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:39:50.536201 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:39:50.537823 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:39:50.544271 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:39:50.573720 lvm[1314]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:39:50.621340 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:39:50.623781 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:39:50.626339 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:39:50.626555 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:39:50.626811 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:39:50.629771 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:39:50.642093 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:39:50.645249 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:39:50.648852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:39:50.655873 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:39:50.674151 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:39:50.690188 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:39:50.704410 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:39:50.707723 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:39:50.734954 kernel: loop0: detected capacity change from 0 to 140768
Dec 13 01:39:50.745532 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:39:50.746628 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:39:50.775974 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:39:50.797096 kernel: loop1: detected capacity change from 0 to 142488
Dec 13 01:39:50.833345 kernel: loop2: detected capacity change from 0 to 8
Dec 13 01:39:50.852972 kernel: loop3: detected capacity change from 0 to 211296
Dec 13 01:39:50.911076 kernel: loop4: detected capacity change from 0 to 140768
Dec 13 01:39:50.941962 kernel: loop5: detected capacity change from 0 to 142488
Dec 13 01:39:50.972998 kernel: loop6: detected capacity change from 0 to 8
Dec 13 01:39:50.979971 kernel: loop7: detected capacity change from 0 to 211296
Dec 13 01:39:51.009464 (sd-merge)[1335]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 13 01:39:51.010511 (sd-merge)[1335]: Merged extensions into '/usr'.
Dec 13 01:39:51.017729 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:39:51.017748 systemd[1]: Reloading...
Dec 13 01:39:51.117997 zram_generator::config[1363]: No configuration found.
Dec 13 01:39:51.204815 ldconfig[1318]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:39:51.271250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:39:51.342762 systemd[1]: Reloading finished in 324 ms.
Dec 13 01:39:51.362543 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:39:51.366667 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:39:51.380114 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:39:51.394882 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:39:51.408077 systemd[1]: Reloading requested from client PID 1413 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:39:51.408097 systemd[1]: Reloading...
Dec 13 01:39:51.447655 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:39:51.448829 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:39:51.450557 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:39:51.453169 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Dec 13 01:39:51.453395 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Dec 13 01:39:51.459085 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:39:51.459266 systemd-tmpfiles[1414]: Skipping /boot
Dec 13 01:39:51.475963 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:39:51.476101 systemd-tmpfiles[1414]: Skipping /boot
Dec 13 01:39:51.494999 zram_generator::config[1439]: No configuration found.
Dec 13 01:39:51.644382 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:39:51.717707 systemd[1]: Reloading finished in 309 ms.
Dec 13 01:39:51.738264 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:39:51.761103 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:39:51.778113 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:39:51.787497 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:39:51.802950 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:39:51.813482 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:39:51.825623 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:39:51.825812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:39:51.830126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:39:51.845597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:39:51.851444 augenrules[1515]: No rules
Dec 13 01:39:51.850806 systemd-networkd[1251]: eth0: Gained IPv6LL
Dec 13 01:39:51.861160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:39:51.865493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:39:51.865611 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:39:51.877872 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:39:51.884665 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:39:51.886428 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:39:51.890431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:39:51.890643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:39:51.897434 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:39:51.897988 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:39:51.903308 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:39:51.903544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:39:51.925776 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:39:51.930549 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:39:51.938719 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:39:51.938935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:39:51.947194 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:39:51.950783 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:39:51.963116 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:39:51.970066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:39:51.970886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:39:51.981041 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:39:51.989054 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:39:51.994586 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:39:51.999125 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:39:51.999423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:39:52.005740 systemd-resolved[1508]: Positive Trust Anchors:
Dec 13 01:39:52.005754 systemd-resolved[1508]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:39:52.005787 systemd-resolved[1508]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:39:52.008165 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:39:52.008415 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:39:52.012365 systemd-resolved[1508]: Using system hostname 'ci-4081-2-1-6-fe8de2e796'.
Dec 13 01:39:52.012560 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:39:52.012793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:39:52.025681 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:39:52.027001 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:39:52.027891 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:39:52.029760 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:39:52.039301 systemd[1]: Reached target network.target - Network.
Dec 13 01:39:52.041928 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:39:52.042434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:39:52.044128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:39:52.044255 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:39:52.044284 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:39:52.045355 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:39:52.110347 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:39:52.111846 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:39:52.114466 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:39:52.116870 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:39:52.119477 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:39:52.121805 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:39:52.121859 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:39:52.123335 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:39:52.125958 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:39:52.128665 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:39:52.131264 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:39:52.136090 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:39:52.144369 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:39:52.151023 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:39:52.156586 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:39:52.157938 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:39:52.160543 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:39:52.163000 systemd[1]: System is tainted: cgroupsv1
Dec 13 01:39:52.163057 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:39:52.163082 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:39:52.168062 systemd-networkd[1251]: eth1: Gained IPv6LL
Dec 13 01:39:52.170041 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:39:52.181103 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:39:52.192170 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:39:52.201526 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:39:52.214149 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:39:52.215014 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:39:52.225137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:39:52.235269 jq[1567]: false
Dec 13 01:39:52.236112 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:39:52.236955 coreos-metadata[1562]: Dec 13 01:39:52.236 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 13 01:39:52.242638 coreos-metadata[1562]: Dec 13 01:39:52.241 INFO Fetch successful
Dec 13 01:39:52.242638 coreos-metadata[1562]: Dec 13 01:39:52.241 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 13 01:39:52.649352 coreos-metadata[1562]: Dec 13 01:39:52.242 INFO Fetch successful
Dec 13 01:39:52.649179 systemd-resolved[1508]: Clock change detected. Flushing caches.
Dec 13 01:39:52.649351 systemd-timesyncd[1544]: Contacted time server 157.90.24.29:123 (0.flatcar.pool.ntp.org).
Dec 13 01:39:52.649419 systemd-timesyncd[1544]: Initial clock synchronization to Fri 2024-12-13 01:39:52.648169 UTC.
Dec 13 01:39:52.659842 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:39:52.663013 dbus-daemon[1564]: [system] SELinux support is enabled
Dec 13 01:39:52.668561 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 13 01:39:52.678203 extend-filesystems[1568]: Found loop4
Dec 13 01:39:52.678203 extend-filesystems[1568]: Found loop5
Dec 13 01:39:52.678203 extend-filesystems[1568]: Found loop6
Dec 13 01:39:52.678203 extend-filesystems[1568]: Found loop7
Dec 13 01:39:52.678203 extend-filesystems[1568]: Found sda
Dec 13 01:39:52.678203 extend-filesystems[1568]: Found sda1
Dec 13 01:39:52.678017 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:39:52.724007 extend-filesystems[1568]: Found sda2
Dec 13 01:39:52.724007 extend-filesystems[1568]: Found sda3
Dec 13 01:39:52.724007 extend-filesystems[1568]: Found usr
Dec 13 01:39:52.724007 extend-filesystems[1568]: Found sda4
Dec 13 01:39:52.724007 extend-filesystems[1568]: Found sda6
Dec 13 01:39:52.724007 extend-filesystems[1568]: Found sda7
Dec 13 01:39:52.724007 extend-filesystems[1568]: Found sda9
Dec 13 01:39:52.724007 extend-filesystems[1568]: Checking size of /dev/sda9
Dec 13 01:39:52.724007 extend-filesystems[1568]: Resized partition /dev/sda9
Dec 13 01:39:52.773271 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Dec 13 01:39:52.693350 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:39:52.773542 extend-filesystems[1596]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:39:52.716441 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:39:52.718596 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:39:52.732866 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:39:52.751167 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:39:52.757855 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:39:52.790711 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:39:52.795165 jq[1600]: true
Dec 13 01:39:52.791019 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:39:52.795351 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:39:52.795680 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:39:52.802062 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:39:52.803554 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:39:52.833892 jq[1610]: true
Dec 13 01:39:52.837239 update_engine[1594]: I20241213 01:39:52.834417 1594 main.cc:92] Flatcar Update Engine starting
Dec 13 01:39:52.844660 (ntainerd)[1611]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:39:52.847866 update_engine[1594]: I20241213 01:39:52.846875 1594 update_check_scheduler.cc:74] Next update check in 6m41s
Dec 13 01:39:52.851883 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:39:52.865130 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1255)
Dec 13 01:39:52.934799 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:39:52.939915 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:39:52.941158 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:39:52.941922 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:39:52.944730 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:39:52.949585 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:39:52.958374 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:39:53.021775 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:39:53.028045 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:39:53.058447 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Dec 13 01:39:53.091254 extend-filesystems[1596]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 13 01:39:53.091254 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 13 01:39:53.091254 extend-filesystems[1596]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Dec 13 01:39:53.115691 extend-filesystems[1568]: Resized filesystem in /dev/sda9
Dec 13 01:39:53.115691 extend-filesystems[1568]: Found sr0
Dec 13 01:39:53.159936 bash[1648]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:39:53.094410 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:39:53.094721 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:39:53.103715 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:39:53.121879 systemd-logind[1592]: New seat seat0.
Dec 13 01:39:53.139482 systemd[1]: Starting sshkeys.service...
Dec 13 01:39:53.166126 systemd-logind[1592]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 01:39:53.166158 systemd-logind[1592]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:39:53.166446 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:39:53.176289 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 01:39:53.192384 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 01:39:53.202216 locksmithd[1640]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:39:53.244325 coreos-metadata[1666]: Dec 13 01:39:53.244 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 13 01:39:53.248959 coreos-metadata[1666]: Dec 13 01:39:53.248 INFO Fetch successful
Dec 13 01:39:53.254931 unknown[1666]: wrote ssh authorized keys file for user: core
Dec 13 01:39:53.302759 update-ssh-keys[1672]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:39:53.304563 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 01:39:53.315519 systemd[1]: Finished sshkeys.service.
Dec 13 01:39:53.338541 sshd_keygen[1602]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:39:53.350438 containerd[1611]: time="2024-12-13T01:39:53.349855085Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.377498443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.379376766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.379411861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.379425718Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.379605916Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.379621004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.379687017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.379698218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.379944621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.379957395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:39:53.380833 containerd[1611]: time="2024-12-13T01:39:53.379970820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:39:53.381165 containerd[1611]: time="2024-12-13T01:39:53.379993292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:39:53.381165 containerd[1611]: time="2024-12-13T01:39:53.380076538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:39:53.381165 containerd[1611]: time="2024-12-13T01:39:53.380307271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:39:53.381165 containerd[1611]: time="2024-12-13T01:39:53.380486938Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:39:53.381165 containerd[1611]: time="2024-12-13T01:39:53.380499000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:39:53.381165 containerd[1611]: time="2024-12-13T01:39:53.380592596Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:39:53.381165 containerd[1611]: time="2024-12-13T01:39:53.380644163Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:39:53.386477 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:39:53.395560 containerd[1611]: time="2024-12-13T01:39:53.395507991Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:39:53.395735 containerd[1611]: time="2024-12-13T01:39:53.395581379Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:39:53.395735 containerd[1611]: time="2024-12-13T01:39:53.395597720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:39:53.395735 containerd[1611]: time="2024-12-13T01:39:53.395612738Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:39:53.395735 containerd[1611]: time="2024-12-13T01:39:53.395627055Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:39:53.395870 containerd[1611]: time="2024-12-13T01:39:53.395804678Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396082429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396242068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396256796Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396271634Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396284187Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396298654Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396309685Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396322269Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396335243Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396347586Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396359108Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396370689Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396401798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.396543 containerd[1611]: time="2024-12-13T01:39:53.396415644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396427396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396440821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396451882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396464576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396475586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396486637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396497778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396512185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396537402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396549355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396560445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396579551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396597766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396609958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397011 containerd[1611]: time="2024-12-13T01:39:53.396625106Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:39:53.397330 containerd[1611]: time="2024-12-13T01:39:53.396671514Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:39:53.397330 containerd[1611]: time="2024-12-13T01:39:53.396689267Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:39:53.397330 containerd[1611]: time="2024-12-13T01:39:53.396699185Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:39:53.397330 containerd[1611]: time="2024-12-13T01:39:53.396711689Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:39:53.397330 containerd[1611]: time="2024-12-13T01:39:53.396722128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397330 containerd[1611]: time="2024-12-13T01:39:53.396732939Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:39:53.397330 containerd[1611]: time="2024-12-13T01:39:53.396742437Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:39:53.397330 containerd[1611]: time="2024-12-13T01:39:53.396752606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:39:53.397486 containerd[1611]: time="2024-12-13T01:39:53.397032851Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:39:53.400252 containerd[1611]: time="2024-12-13T01:39:53.398314695Z" level=info msg="Connect containerd service"
Dec 13 01:39:53.400252 containerd[1611]: time="2024-12-13T01:39:53.398446092Z" level=info msg="using legacy CRI server"
Dec 13 01:39:53.400252 containerd[1611]: time="2024-12-13T01:39:53.398456702Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:39:53.400252 containerd[1611]: time="2024-12-13T01:39:53.398539788Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:39:53.398400 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:39:53.401354 containerd[1611]: time="2024-12-13T01:39:53.400890297Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:39:53.401354 containerd[1611]: time="2024-12-13T01:39:53.400974174Z" level=info msg="Start subscribing containerd event"
Dec 13 01:39:53.401354 containerd[1611]: time="2024-12-13T01:39:53.401014690Z" level=info msg="Start recovering state"
Dec 13 01:39:53.401354 containerd[1611]: time="2024-12-13T01:39:53.401077277Z" level=info msg="Start event monitor"
Dec 13 01:39:53.401354 containerd[1611]: time="2024-12-13T01:39:53.401103817Z" level=info msg="Start snapshots syncer"
Dec 13 01:39:53.401354 containerd[1611]: time="2024-12-13T01:39:53.401112784Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:39:53.401354 containerd[1611]: time="2024-12-13T01:39:53.401120288Z" level=info msg="Start streaming server"
Dec 13 01:39:53.402612 containerd[1611]: time="2024-12-13T01:39:53.402596506Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:39:53.402871 containerd[1611]: time="2024-12-13T01:39:53.402807132Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:39:53.403131 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:39:53.409324 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:39:53.411154 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:39:53.421169 containerd[1611]: time="2024-12-13T01:39:53.419978969Z" level=info msg="containerd successfully booted in 0.071289s"
Dec 13 01:39:53.425414 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:39:53.447625 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:39:53.457820 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:39:53.467788 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:39:53.468745 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:39:54.326475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:39:54.326801 (kubelet)[1711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:39:54.331879 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:39:54.335845 systemd[1]: Startup finished in 8.666s (kernel) + 6.079s (userspace) = 14.746s.
Dec 13 01:39:55.415152 kubelet[1711]: E1213 01:39:55.414901 1711 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:39:55.422813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:39:55.425152 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:40:05.473171 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:40:05.483402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:40:05.732320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:40:05.738860 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:40:05.831659 kubelet[1736]: E1213 01:40:05.831525 1736 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:40:05.847400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:40:05.848244 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:40:15.973275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 01:40:15.982597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:40:16.185279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:40:16.193465 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:40:16.259956 kubelet[1757]: E1213 01:40:16.259775 1757 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:40:16.269791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:40:16.271006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:40:26.472660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 01:40:26.478234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:40:26.671466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:40:26.683794 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:40:26.741082 kubelet[1779]: E1213 01:40:26.740881 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:40:26.747343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:40:26.747942 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:40:36.973278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 01:40:36.980294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:40:37.222618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:40:37.222943 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:40:37.296267 kubelet[1801]: E1213 01:40:37.296179 1801 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:40:37.304766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:40:37.306364 systemd[1]: kubelet.service: Failed with result 'exit-code'.
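The crash loop recorded here repeats the same root cause each time: kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist, and that file is normally written by `kubeadm init` or `kubeadm join`, which have not yet run on this node. As a rough sketch of the failure mode (the function name `check_kubelet_config` and its messages are hypothetical, not part of Flatcar or kubelet):

```shell
# check_kubelet_config FILE
# Hypothetical helper mirroring the startup behaviour seen in the log:
# fail when the kubelet config file is absent, succeed when it is present.
check_kubelet_config() {
    cfg="$1"
    if [ ! -f "$cfg" ]; then
        # kubelet's actual error above ends with:
        # "open /var/lib/kubelet/config.yaml: no such file or directory"
        echo "kubelet config $cfg missing; run 'kubeadm init' or 'kubeadm join' first" >&2
        return 1
    fi
    echo "kubelet config present: $cfg"
}

# Demo: the second call fails on a node that has not been joined yet.
check_kubelet_config /etc/hostname || true
check_kubelet_config /var/lib/kubelet/config.yaml || true
```

Because the unit is configured to restart on failure, systemd keeps rescheduling the service roughly every ten seconds, which is exactly what the climbing restart counter in the surrounding entries shows.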
Dec 13 01:40:38.635790 update_engine[1594]: I20241213 01:40:38.635535 1594 update_attempter.cc:509] Updating boot flags...
Dec 13 01:40:38.744314 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1819)
Dec 13 01:40:38.803139 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1823)
Dec 13 01:40:38.857130 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1823)
Dec 13 01:40:47.473137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 01:40:47.480379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:40:47.706458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:40:47.723999 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:40:47.784188 kubelet[1841]: E1213 01:40:47.784119 1841 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:40:47.789581 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:40:47.789847 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:40:57.972573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 13 01:40:57.978329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:40:58.134425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:40:58.134713 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:40:58.181365 kubelet[1864]: E1213 01:40:58.181240 1864 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:40:58.186450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:40:58.186751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:41:08.222983 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 13 01:41:08.230024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:41:08.424241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:41:08.427656 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:41:08.505467 kubelet[1886]: E1213 01:41:08.505243 1886 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:41:08.514416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:41:08.515554 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:41:18.723422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Dec 13 01:41:18.730756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:41:18.971381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:41:18.984043 (kubelet)[1907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:41:19.057494 kubelet[1907]: E1213 01:41:19.057408 1907 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:41:19.063738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:41:19.065282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:41:29.223190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Dec 13 01:41:29.231812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:41:29.456676 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:41:29.457591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:41:29.514087 kubelet[1928]: E1213 01:41:29.513784 1928 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:41:29.518444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:41:29.519001 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:41:39.723596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Dec 13 01:41:39.737032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:41:39.922352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:41:39.935695 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:41:39.989602 kubelet[1949]: E1213 01:41:39.989383 1949 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:41:39.993562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:41:39.994077 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:41:44.871263 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:41:44.878544 systemd[1]: Started sshd@0-49.13.58.183:22-147.75.109.163:48732.service - OpenSSH per-connection server daemon (147.75.109.163:48732).
Dec 13 01:41:45.893469 sshd[1959]: Accepted publickey for core from 147.75.109.163 port 48732 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:41:45.898856 sshd[1959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:41:45.916921 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:41:45.924840 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:41:45.929896 systemd-logind[1592]: New session 1 of user core.
Dec 13 01:41:45.963181 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:41:45.977033 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:41:45.992283 (systemd)[1965]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:41:46.134694 systemd[1965]: Queued start job for default target default.target.
Dec 13 01:41:46.135129 systemd[1965]: Created slice app.slice - User Application Slice.
Dec 13 01:41:46.135156 systemd[1965]: Reached target paths.target - Paths.
Dec 13 01:41:46.135173 systemd[1965]: Reached target timers.target - Timers.
Dec 13 01:41:46.140286 systemd[1965]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:41:46.152699 systemd[1965]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:41:46.152851 systemd[1965]: Reached target sockets.target - Sockets.
Dec 13 01:41:46.152891 systemd[1965]: Reached target basic.target - Basic System.
Dec 13 01:41:46.153546 systemd[1965]: Reached target default.target - Main User Target.
Dec 13 01:41:46.153589 systemd[1965]: Startup finished in 150ms.
Dec 13 01:41:46.154299 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:41:46.167643 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:41:46.853497 systemd[1]: Started sshd@1-49.13.58.183:22-147.75.109.163:60674.service - OpenSSH per-connection server daemon (147.75.109.163:60674).
Dec 13 01:41:47.833900 sshd[1977]: Accepted publickey for core from 147.75.109.163 port 60674 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:41:47.836772 sshd[1977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:41:47.844355 systemd-logind[1592]: New session 2 of user core.
Dec 13 01:41:47.850531 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:41:48.516553 sshd[1977]: pam_unix(sshd:session): session closed for user core
Dec 13 01:41:48.521235 systemd[1]: sshd@1-49.13.58.183:22-147.75.109.163:60674.service: Deactivated successfully.
Dec 13 01:41:48.525937 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 01:41:48.527440 systemd-logind[1592]: Session 2 logged out. Waiting for processes to exit.
Dec 13 01:41:48.529754 systemd-logind[1592]: Removed session 2.
Dec 13 01:41:48.682473 systemd[1]: Started sshd@2-49.13.58.183:22-147.75.109.163:60682.service - OpenSSH per-connection server daemon (147.75.109.163:60682).
Dec 13 01:41:49.679825 sshd[1985]: Accepted publickey for core from 147.75.109.163 port 60682 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:41:49.682814 sshd[1985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:41:49.691633 systemd-logind[1592]: New session 3 of user core.
Dec 13 01:41:49.697724 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 01:41:50.222573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Dec 13 01:41:50.232378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:41:50.356405 sshd[1985]: pam_unix(sshd:session): session closed for user core
Dec 13 01:41:50.373580 systemd[1]: sshd@2-49.13.58.183:22-147.75.109.163:60682.service: Deactivated successfully.
Dec 13 01:41:50.384841 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 01:41:50.386530 systemd-logind[1592]: Session 3 logged out. Waiting for processes to exit.
Dec 13 01:41:50.389989 systemd-logind[1592]: Removed session 3.
Dec 13 01:41:50.397343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
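The roughly ten-second spacing between consecutive "Scheduled restart job" entries (counter 10 at 01:41:39.723596, counter 11 at 01:41:50.222573) reflects the unit's restart delay; systemd's default RestartSec is 100 ms, so a ~10 s gap suggests the kubelet unit sets RestartSec=10 (an inference from the timestamps, not something the log states). Computing the gap from the journal timestamps:

```python
from datetime import datetime

def gap_seconds(t1: str, t2: str) -> float:
    """Seconds between two same-day journal timestamps (HH:MM:SS.ffffff)."""
    fmt = "%H:%M:%S.%f"
    return (datetime.strptime(t2, fmt) - datetime.strptime(t1, fmt)).total_seconds()

# Consecutive "Scheduled restart job" entries from this log:
delay = gap_seconds("01:41:39.723596", "01:41:50.222573")
print(round(delay, 1))  # → 10.5
```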
Dec 13 01:41:50.397826 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:41:50.451423 kubelet[2005]: E1213 01:41:50.451284 2005 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:41:50.455740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:41:50.456223 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:41:50.523486 systemd[1]: Started sshd@3-49.13.58.183:22-147.75.109.163:60686.service - OpenSSH per-connection server daemon (147.75.109.163:60686).
Dec 13 01:41:51.533801 sshd[2014]: Accepted publickey for core from 147.75.109.163 port 60686 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:41:51.537761 sshd[2014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:41:51.549228 systemd-logind[1592]: New session 4 of user core.
Dec 13 01:41:51.552694 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 01:41:52.229524 sshd[2014]: pam_unix(sshd:session): session closed for user core
Dec 13 01:41:52.238859 systemd[1]: sshd@3-49.13.58.183:22-147.75.109.163:60686.service: Deactivated successfully.
Dec 13 01:41:52.244056 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 01:41:52.245546 systemd-logind[1592]: Session 4 logged out. Waiting for processes to exit.
Dec 13 01:41:52.247746 systemd-logind[1592]: Removed session 4.
Dec 13 01:41:52.400578 systemd[1]: Started sshd@4-49.13.58.183:22-147.75.109.163:60692.service - OpenSSH per-connection server daemon (147.75.109.163:60692).
Dec 13 01:41:53.413832 sshd[2022]: Accepted publickey for core from 147.75.109.163 port 60692 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:41:53.416856 sshd[2022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:41:53.426601 systemd-logind[1592]: New session 5 of user core.
Dec 13 01:41:53.437614 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 01:41:53.963009 sudo[2026]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 01:41:53.963759 sudo[2026]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:41:53.992652 sudo[2026]: pam_unix(sudo:session): session closed for user root
Dec 13 01:41:54.154922 sshd[2022]: pam_unix(sshd:session): session closed for user core
Dec 13 01:41:54.163790 systemd[1]: sshd@4-49.13.58.183:22-147.75.109.163:60692.service: Deactivated successfully.
Dec 13 01:41:54.168907 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:41:54.170417 systemd-logind[1592]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:41:54.172040 systemd-logind[1592]: Removed session 5.
Dec 13 01:41:54.320531 systemd[1]: Started sshd@5-49.13.58.183:22-147.75.109.163:60694.service - OpenSSH per-connection server daemon (147.75.109.163:60694).
Dec 13 01:41:55.326303 sshd[2031]: Accepted publickey for core from 147.75.109.163 port 60694 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:41:55.330050 sshd[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:41:55.338793 systemd-logind[1592]: New session 6 of user core.
Dec 13 01:41:55.347647 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:41:55.852600 sudo[2036]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 01:41:55.853018 sudo[2036]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:41:55.859035 sudo[2036]: pam_unix(sudo:session): session closed for user root
Dec 13 01:41:55.872672 sudo[2035]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Dec 13 01:41:55.873439 sudo[2035]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:41:55.891634 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Dec 13 01:41:55.895985 auditctl[2039]: No rules
Dec 13 01:41:55.896942 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 01:41:55.897730 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Dec 13 01:41:55.911144 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:41:55.967004 augenrules[2058]: No rules
Dec 13 01:41:55.968729 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:41:55.973976 sudo[2035]: pam_unix(sudo:session): session closed for user root
Dec 13 01:41:56.136192 sshd[2031]: pam_unix(sshd:session): session closed for user core
Dec 13 01:41:56.145680 systemd[1]: sshd@5-49.13.58.183:22-147.75.109.163:60694.service: Deactivated successfully.
Dec 13 01:41:56.151166 systemd-logind[1592]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:41:56.151520 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:41:56.153710 systemd-logind[1592]: Removed session 6.
Dec 13 01:41:56.300567 systemd[1]: Started sshd@6-49.13.58.183:22-147.75.109.163:60058.service - OpenSSH per-connection server daemon (147.75.109.163:60058).
Dec 13 01:41:57.299036 sshd[2067]: Accepted publickey for core from 147.75.109.163 port 60058 ssh2: RSA SHA256:7MYu5z3X6ozTHsm4XD3jShFQ82oedB8NoqjW9/hHJEw
Dec 13 01:41:57.301787 sshd[2067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:41:57.308768 systemd-logind[1592]: New session 7 of user core.
Dec 13 01:41:57.312518 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:41:57.822028 sudo[2071]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 01:41:57.822678 sudo[2071]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 01:41:58.769995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:41:58.787519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:41:58.827680 systemd[1]: Reloading requested from client PID 2111 ('systemctl') (unit session-7.scope)...
Dec 13 01:41:58.827887 systemd[1]: Reloading...
Dec 13 01:41:58.961129 zram_generator::config[2150]: No configuration found.
Dec 13 01:41:59.103448 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:41:59.187028 systemd[1]: Reloading finished in 358 ms.
Dec 13 01:41:59.236057 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 01:41:59.236188 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 01:41:59.236765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:41:59.242809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:41:59.426245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
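At this point install.sh (run via sudo above) has stopped kubelet, triggered a systemd daemon reload, and started the service again; the next start is the first one that gets past config loading, which implies the script wrote /var/lib/kubelet/config.yaml. A hypothetical pre-flight check in the same spirit (the path comes from the earlier failure messages; the helper itself is invented, not part of the log's tooling):

```python
from pathlib import Path

def kubelet_config_present(root: str = "/") -> bool:
    """True once var/lib/kubelet/config.yaml exists under `root`.
    Earlier entries in this log show kubelet exiting with status 1
    until that file is written (normally by kubeadm init/join)."""
    return (Path(root) / "var/lib/kubelet/config.yaml").is_file()

print(kubelet_config_present("/nonexistent-root"))  # → False
```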
Dec 13 01:41:59.444628 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:41:59.483221 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:41:59.483221 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:41:59.483221 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:41:59.484049 kubelet[2213]: I1213 01:41:59.483463 2213 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:41:59.753733 kubelet[2213]: I1213 01:41:59.753675 2213 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:41:59.753733 kubelet[2213]: I1213 01:41:59.753704 2213 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:41:59.754021 kubelet[2213]: I1213 01:41:59.753906 2213 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:41:59.779126 kubelet[2213]: I1213 01:41:59.778922 2213 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:41:59.796747 kubelet[2213]: I1213 01:41:59.796688 2213 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:41:59.797892 kubelet[2213]: I1213 01:41:59.797859 2213 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:41:59.798318 kubelet[2213]: I1213 01:41:59.798280 2213 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:41:59.798398 kubelet[2213]: I1213 01:41:59.798326 2213 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:41:59.798398 kubelet[2213]: I1213 01:41:59.798347 2213 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:41:59.798601 kubelet[2213]: I1213 01:41:59.798568 2213 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:41:59.798774 kubelet[2213]: I1213 01:41:59.798748 2213 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:41:59.798821 kubelet[2213]: I1213 01:41:59.798782 2213 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:41:59.799372 kubelet[2213]: I1213 01:41:59.798842 2213 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:41:59.799372 kubelet[2213]: I1213 01:41:59.798881 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:41:59.799372 kubelet[2213]: E1213 01:41:59.799155 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:41:59.799372 kubelet[2213]: E1213 01:41:59.799201 2213 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:41:59.801766 kubelet[2213]: I1213 01:41:59.801749 2213 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:41:59.806064 kubelet[2213]: I1213 01:41:59.804973 2213 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:41:59.806064 kubelet[2213]: W1213 01:41:59.805038 2213 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
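The entries above and below mix the systemd journal prefix with kubelet's klog header (severity letter, MMDD date, time, thread/pid, file:line). A rough parser for that inner klog layout, written for this task (the function and field names are invented; the format itself is klog's standard header):

```python
import re

# klog header: <L><mmdd> <hh:mm:ss.uuuuuu> <pid> <file:line>] <message>
KLOG_RE = re.compile(
    r"([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^\]]+)\] (.*)")

def parse_klog(line: str):
    """Parse one klog-formatted line into a dict, or None if it doesn't match."""
    m = KLOG_RE.match(line)
    if not m:
        return None
    sev, month, day, ts, pid, loc, msg = m.groups()
    return {"severity": {"I": "INFO", "W": "WARNING",
                         "E": "ERROR", "F": "FATAL"}[sev],
            "time": ts, "pid": int(pid), "location": loc, "message": msg}

# A line in the shape seen below in this log:
rec = parse_klog('E1213 01:41:59.823072    2213 '
                 'kubelet_node_status.go:462] "Error getting the current node"')
print(rec["severity"], rec["location"])  # → ERROR kubelet_node_status.go:462
```

Filtering on severity E/F is usually enough to separate the startup noise from real failures.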
Dec 13 01:41:59.806064 kubelet[2213]: I1213 01:41:59.805697 2213 server.go:1256] "Started kubelet"
Dec 13 01:41:59.806064 kubelet[2213]: W1213 01:41:59.806053 2213 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 01:41:59.806262 kubelet[2213]: E1213 01:41:59.806126 2213 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 01:41:59.807466 kubelet[2213]: I1213 01:41:59.806964 2213 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:41:59.809627 kubelet[2213]: I1213 01:41:59.808914 2213 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:41:59.812107 kubelet[2213]: I1213 01:41:59.810640 2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:41:59.812107 kubelet[2213]: I1213 01:41:59.810925 2213 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:41:59.814207 kubelet[2213]: I1213 01:41:59.814159 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:41:59.816534 kubelet[2213]: W1213 01:41:59.816518 2213 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 01:41:59.816770 kubelet[2213]: E1213 01:41:59.816600 2213 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 01:41:59.818202 kubelet[2213]: E1213 01:41:59.818187 2213 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.18109908d851d0e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2024-12-13 01:41:59.805669607 +0000 UTC m=+0.357018106,LastTimestamp:2024-12-13 01:41:59.805669607 +0000 UTC m=+0.357018106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}"
Dec 13 01:41:59.819839 kubelet[2213]: E1213 01:41:59.819620 2213 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:41:59.823105 kubelet[2213]: E1213 01:41:59.823072 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:41:59.823223 kubelet[2213]: I1213 01:41:59.823208 2213 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:41:59.823838 kubelet[2213]: I1213 01:41:59.823398 2213 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:41:59.823838 kubelet[2213]: I1213 01:41:59.823447 2213 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:41:59.824562 kubelet[2213]: I1213 01:41:59.824535 2213 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:41:59.824763 kubelet[2213]: I1213 01:41:59.824745 2213 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:41:59.830613 kubelet[2213]: I1213 01:41:59.830595 2213 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:41:59.859992 kubelet[2213]: E1213 01:41:59.859964 2213 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.4\" not found" node="10.0.0.4"
Dec 13 01:41:59.869244 kubelet[2213]: I1213 01:41:59.868812 2213 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:41:59.869244 kubelet[2213]: I1213 01:41:59.868851 2213 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:41:59.869244 kubelet[2213]: I1213 01:41:59.868871 2213 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:41:59.873223 kubelet[2213]: I1213 01:41:59.872692 2213 policy_none.go:49] "None policy: Start"
Dec 13 01:41:59.874189 kubelet[2213]: I1213 01:41:59.874164 2213 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:41:59.874242 kubelet[2213]: I1213 01:41:59.874196 2213 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:41:59.881456 kubelet[2213]: I1213 01:41:59.881439 2213 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:41:59.882303 kubelet[2213]: I1213 01:41:59.882288 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:41:59.888280 kubelet[2213]: E1213 01:41:59.888267 2213 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found"
Dec 13 01:41:59.924927 kubelet[2213]: I1213 01:41:59.924901 2213 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.4"
Dec 13 01:41:59.928147 kubelet[2213]: I1213 01:41:59.928131 2213 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.4"
Dec 13 01:41:59.935704 kubelet[2213]: E1213 01:41:59.935665 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:41:59.940238 kubelet[2213]: I1213 01:41:59.940193 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:41:59.943284 kubelet[2213]: I1213 01:41:59.943240 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:41:59.943351 kubelet[2213]: I1213 01:41:59.943299 2213 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:41:59.943351 kubelet[2213]: I1213 01:41:59.943325 2213 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:41:59.943515 kubelet[2213]: E1213 01:41:59.943484 2213 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 01:42:00.037013 kubelet[2213]: E1213 01:42:00.036780 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:00.137914 kubelet[2213]: E1213 01:42:00.137836 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:00.238596 kubelet[2213]: E1213 01:42:00.238513 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:00.338871 kubelet[2213]: E1213 01:42:00.338705 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:00.439913 kubelet[2213]: E1213 01:42:00.439849 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:00.540854 kubelet[2213]: E1213 01:42:00.540770 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:00.642025 kubelet[2213]: E1213 01:42:00.641829 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:00.742940 kubelet[2213]: E1213 01:42:00.742863 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:00.767187 kubelet[2213]: I1213 01:42:00.767132 2213 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 01:42:00.767493 kubelet[2213]: W1213 01:42:00.767429 2213 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 01:42:00.767493 kubelet[2213]: W1213 01:42:00.767432 2213 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 01:42:00.767493 kubelet[2213]: W1213 01:42:00.767488 2213 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 01:42:00.799898 kubelet[2213]: E1213 01:42:00.799725 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:00.843929 kubelet[2213]: E1213 01:42:00.843855 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:00.944473 kubelet[2213]: E1213 01:42:00.944243 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:00.954122 sudo[2071]: pam_unix(sudo:session): session closed for user root
Dec 13 01:42:01.044921 kubelet[2213]: E1213 01:42:01.044821 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:01.113579 sshd[2067]: pam_unix(sshd:session): session closed for user core
Dec 13 01:42:01.122925 systemd[1]: sshd@6-49.13.58.183:22-147.75.109.163:60058.service: Deactivated successfully.
Dec 13 01:42:01.129083 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:42:01.129674 systemd-logind[1592]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:42:01.133424 systemd-logind[1592]: Removed session 7.
Dec 13 01:42:01.145933 kubelet[2213]: E1213 01:42:01.145161 2213 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 01:42:01.246931 kubelet[2213]: I1213 01:42:01.246711 2213 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 01:42:01.247670 containerd[1611]: time="2024-12-13T01:42:01.247592996Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
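The runtime-config update above hands the pod CIDR 192.168.1.0/24 to the container runtime, and containerd then waits for a CNI plugin to drop its config. Basic properties of that range can be checked with Python's stdlib (the specific address 192.168.1.77 below is an arbitrary example, not from the log):

```python
import ipaddress

# Pod CIDR pushed through CRI in the entry above (value from the log).
pod_cidr = ipaddress.ip_network("192.168.1.0/24")
print(pod_cidr.num_addresses)    # → 256
print(pod_cidr.network_address)  # → 192.168.1.0
print(ipaddress.ip_address("192.168.1.77") in pod_cidr)  # → True
```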
Dec 13 01:42:01.248588 kubelet[2213]: I1213 01:42:01.247848 2213 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 01:42:01.800160 kubelet[2213]: E1213 01:42:01.800047 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:01.801433 kubelet[2213]: I1213 01:42:01.801356 2213 apiserver.go:52] "Watching apiserver"
Dec 13 01:42:01.808424 kubelet[2213]: I1213 01:42:01.808361 2213 topology_manager.go:215] "Topology Admit Handler" podUID="7ff62acb-5301-4c5f-b552-cdc301cf9f14" podNamespace="calico-system" podName="calico-node-2qpwf"
Dec 13 01:42:01.808635 kubelet[2213]: I1213 01:42:01.808601 2213 topology_manager.go:215] "Topology Admit Handler" podUID="0ba65a02-8118-4915-9dfa-fbbe2bbe53d0" podNamespace="calico-system" podName="csi-node-driver-sdf67"
Dec 13 01:42:01.808740 kubelet[2213]: I1213 01:42:01.808689 2213 topology_manager.go:215] "Topology Admit Handler" podUID="c3e9fb36-d77b-4a50-8a03-38032ba4faba" podNamespace="kube-system" podName="kube-proxy-ngg74"
Dec 13 01:42:01.809619 kubelet[2213]: E1213 01:42:01.808956 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdf67" podUID="0ba65a02-8118-4915-9dfa-fbbe2bbe53d0"
Dec 13 01:42:01.825129 kubelet[2213]: I1213 01:42:01.824978 2213 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:42:01.837551 kubelet[2213]: I1213 01:42:01.837169 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ff62acb-5301-4c5f-b552-cdc301cf9f14-var-lib-calico\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf"
Dec 13 01:42:01.837551 kubelet[2213]: I1213 01:42:01.837284 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7ff62acb-5301-4c5f-b552-cdc301cf9f14-cni-log-dir\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf"
Dec 13 01:42:01.837551 kubelet[2213]: I1213 01:42:01.837338 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ba65a02-8118-4915-9dfa-fbbe2bbe53d0-kubelet-dir\") pod \"csi-node-driver-sdf67\" (UID: \"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0\") " pod="calico-system/csi-node-driver-sdf67"
Dec 13 01:42:01.837551 kubelet[2213]: I1213 01:42:01.837433 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0ba65a02-8118-4915-9dfa-fbbe2bbe53d0-socket-dir\") pod \"csi-node-driver-sdf67\" (UID: \"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0\") " pod="calico-system/csi-node-driver-sdf67"
Dec 13 01:42:01.837950 kubelet[2213]: I1213 01:42:01.837924 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5qgq\" (UniqueName: \"kubernetes.io/projected/c3e9fb36-d77b-4a50-8a03-38032ba4faba-kube-api-access-x5qgq\") pod \"kube-proxy-ngg74\" (UID: \"c3e9fb36-d77b-4a50-8a03-38032ba4faba\") " pod="kube-system/kube-proxy-ngg74"
Dec 13 01:42:01.839163 kubelet[2213]: I1213 01:42:01.838163 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ff62acb-5301-4c5f-b552-cdc301cf9f14-xtables-lock\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf"
Dec 13 01:42:01.839163 kubelet[2213]: I1213 01:42:01.838232 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7ff62acb-5301-4c5f-b552-cdc301cf9f14-var-run-calico\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf"
Dec 13 01:42:01.839163 kubelet[2213]: I1213 01:42:01.838457 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7ff62acb-5301-4c5f-b552-cdc301cf9f14-cni-bin-dir\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf"
Dec 13 01:42:01.839163 kubelet[2213]: I1213 01:42:01.838515 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c3e9fb36-d77b-4a50-8a03-38032ba4faba-kube-proxy\") pod \"kube-proxy-ngg74\" (UID: \"c3e9fb36-d77b-4a50-8a03-38032ba4faba\") " pod="kube-system/kube-proxy-ngg74"
Dec 13 01:42:01.839163 kubelet[2213]: I1213 01:42:01.838568 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c3e9fb36-d77b-4a50-8a03-38032ba4faba-lib-modules\") pod \"kube-proxy-ngg74\" (UID: \"c3e9fb36-d77b-4a50-8a03-38032ba4faba\") " pod="kube-system/kube-proxy-ngg74"
Dec 13 01:42:01.839511 kubelet[2213]: I1213 01:42:01.838620 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7ff62acb-5301-4c5f-b552-cdc301cf9f14-node-certs\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf"
Dec 13 01:42:01.839511 kubelet[2213]: I1213 01:42:01.838670 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7ff62acb-5301-4c5f-b552-cdc301cf9f14-cni-net-dir\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf"
Dec 13 01:42:01.839511 kubelet[2213]: I1213 01:42:01.838714 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7ff62acb-5301-4c5f-b552-cdc301cf9f14-flexvol-driver-host\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf"
Dec 13 01:42:01.839511 kubelet[2213]: I1213 01:42:01.838844 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5drv\" (UniqueName: \"kubernetes.io/projected/0ba65a02-8118-4915-9dfa-fbbe2bbe53d0-kube-api-access-n5drv\") pod \"csi-node-driver-sdf67\" (UID: \"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0\") " pod="calico-system/csi-node-driver-sdf67"
Dec 13 01:42:01.839511 kubelet[2213]: I1213 01:42:01.838895 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c3e9fb36-d77b-4a50-8a03-38032ba4faba-xtables-lock\") pod \"kube-proxy-ngg74\" (UID: \"c3e9fb36-d77b-4a50-8a03-38032ba4faba\") " pod="kube-system/kube-proxy-ngg74"
Dec 13 01:42:01.839766 kubelet[2213]: I1213 01:42:01.838948 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0ba65a02-8118-4915-9dfa-fbbe2bbe53d0-registration-dir\") pod \"csi-node-driver-sdf67\" (UID: \"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0\") " pod="calico-system/csi-node-driver-sdf67"
Dec 13 01:42:01.839766 kubelet[2213]: I1213 01:42:01.839000 2213 reconciler_common.go:258]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ff62acb-5301-4c5f-b552-cdc301cf9f14-lib-modules\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf" Dec 13 01:42:01.839901 kubelet[2213]: I1213 01:42:01.839082 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7ff62acb-5301-4c5f-b552-cdc301cf9f14-policysync\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf" Dec 13 01:42:01.840160 kubelet[2213]: I1213 01:42:01.840117 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ff62acb-5301-4c5f-b552-cdc301cf9f14-tigera-ca-bundle\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf" Dec 13 01:42:01.840251 kubelet[2213]: I1213 01:42:01.840185 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk466\" (UniqueName: \"kubernetes.io/projected/7ff62acb-5301-4c5f-b552-cdc301cf9f14-kube-api-access-vk466\") pod \"calico-node-2qpwf\" (UID: \"7ff62acb-5301-4c5f-b552-cdc301cf9f14\") " pod="calico-system/calico-node-2qpwf" Dec 13 01:42:01.840251 kubelet[2213]: I1213 01:42:01.840236 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0ba65a02-8118-4915-9dfa-fbbe2bbe53d0-varrun\") pod \"csi-node-driver-sdf67\" (UID: \"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0\") " pod="calico-system/csi-node-driver-sdf67" Dec 13 01:42:01.944135 kubelet[2213]: E1213 01:42:01.944033 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Dec 13 01:42:01.944135 kubelet[2213]: W1213 01:42:01.944075 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.944135 kubelet[2213]: E1213 01:42:01.944146 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.945401 kubelet[2213]: E1213 01:42:01.944538 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.945401 kubelet[2213]: W1213 01:42:01.944553 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.945401 kubelet[2213]: E1213 01:42:01.944577 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.945401 kubelet[2213]: E1213 01:42:01.944953 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.945401 kubelet[2213]: W1213 01:42:01.944974 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.945401 kubelet[2213]: E1213 01:42:01.945002 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.945780 kubelet[2213]: E1213 01:42:01.945521 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.945780 kubelet[2213]: W1213 01:42:01.945540 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.945780 kubelet[2213]: E1213 01:42:01.945568 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.948170 kubelet[2213]: E1213 01:42:01.946368 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.948170 kubelet[2213]: W1213 01:42:01.946394 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.948170 kubelet[2213]: E1213 01:42:01.946424 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.948170 kubelet[2213]: E1213 01:42:01.946810 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.948170 kubelet[2213]: W1213 01:42:01.946826 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.948170 kubelet[2213]: E1213 01:42:01.946849 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.948170 kubelet[2213]: E1213 01:42:01.947356 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.948170 kubelet[2213]: W1213 01:42:01.947376 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.948170 kubelet[2213]: E1213 01:42:01.947408 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.948170 kubelet[2213]: E1213 01:42:01.947923 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.948892 kubelet[2213]: W1213 01:42:01.947941 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.948892 kubelet[2213]: E1213 01:42:01.947962 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.948892 kubelet[2213]: E1213 01:42:01.948470 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.948892 kubelet[2213]: W1213 01:42:01.948489 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.948892 kubelet[2213]: E1213 01:42:01.948525 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.949297 kubelet[2213]: E1213 01:42:01.948941 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.949297 kubelet[2213]: W1213 01:42:01.948961 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.949297 kubelet[2213]: E1213 01:42:01.948988 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.949501 kubelet[2213]: E1213 01:42:01.949460 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.949501 kubelet[2213]: W1213 01:42:01.949480 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.949636 kubelet[2213]: E1213 01:42:01.949519 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.950035 kubelet[2213]: E1213 01:42:01.949995 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.950035 kubelet[2213]: W1213 01:42:01.950033 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.950249 kubelet[2213]: E1213 01:42:01.950081 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.950973 kubelet[2213]: E1213 01:42:01.950783 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.950973 kubelet[2213]: W1213 01:42:01.950809 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.950973 kubelet[2213]: E1213 01:42:01.950837 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.951633 kubelet[2213]: E1213 01:42:01.951324 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.951633 kubelet[2213]: W1213 01:42:01.951350 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.951633 kubelet[2213]: E1213 01:42:01.951425 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.952012 kubelet[2213]: E1213 01:42:01.951858 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.952012 kubelet[2213]: W1213 01:42:01.951880 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.952012 kubelet[2213]: E1213 01:42:01.951907 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.952858 kubelet[2213]: E1213 01:42:01.952450 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.952858 kubelet[2213]: W1213 01:42:01.952472 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.952858 kubelet[2213]: E1213 01:42:01.952496 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.952858 kubelet[2213]: E1213 01:42:01.952823 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.952858 kubelet[2213]: W1213 01:42:01.952836 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.952858 kubelet[2213]: E1213 01:42:01.952856 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.954648 kubelet[2213]: E1213 01:42:01.953345 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.954648 kubelet[2213]: W1213 01:42:01.953366 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.954648 kubelet[2213]: E1213 01:42:01.953395 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.954648 kubelet[2213]: E1213 01:42:01.953864 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.954648 kubelet[2213]: W1213 01:42:01.953883 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.954648 kubelet[2213]: E1213 01:42:01.953910 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.954648 kubelet[2213]: E1213 01:42:01.954540 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.954648 kubelet[2213]: W1213 01:42:01.954562 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.954648 kubelet[2213]: E1213 01:42:01.954590 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.955313 kubelet[2213]: E1213 01:42:01.955015 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.955313 kubelet[2213]: W1213 01:42:01.955036 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.955313 kubelet[2213]: E1213 01:42:01.955063 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.956323 kubelet[2213]: E1213 01:42:01.955525 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.956323 kubelet[2213]: W1213 01:42:01.955546 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.956323 kubelet[2213]: E1213 01:42:01.955568 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.956323 kubelet[2213]: E1213 01:42:01.956009 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.956323 kubelet[2213]: W1213 01:42:01.956031 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.956323 kubelet[2213]: E1213 01:42:01.956061 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.956760 kubelet[2213]: E1213 01:42:01.956564 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.956760 kubelet[2213]: W1213 01:42:01.956584 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.956760 kubelet[2213]: E1213 01:42:01.956614 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.959988 kubelet[2213]: E1213 01:42:01.959053 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.959988 kubelet[2213]: W1213 01:42:01.959076 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.959988 kubelet[2213]: E1213 01:42:01.959152 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.959988 kubelet[2213]: E1213 01:42:01.959748 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.959988 kubelet[2213]: W1213 01:42:01.959777 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.959988 kubelet[2213]: E1213 01:42:01.959808 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.960464 kubelet[2213]: E1213 01:42:01.960302 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.960464 kubelet[2213]: W1213 01:42:01.960321 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.960464 kubelet[2213]: E1213 01:42:01.960352 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.960738 kubelet[2213]: E1213 01:42:01.960715 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.960795 kubelet[2213]: W1213 01:42:01.960739 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.960795 kubelet[2213]: E1213 01:42:01.960764 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.961730 kubelet[2213]: E1213 01:42:01.961340 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.961730 kubelet[2213]: W1213 01:42:01.961369 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.961730 kubelet[2213]: E1213 01:42:01.961399 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.962318 kubelet[2213]: E1213 01:42:01.962257 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.962318 kubelet[2213]: W1213 01:42:01.962314 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.962468 kubelet[2213]: E1213 01:42:01.962351 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.966123 kubelet[2213]: E1213 01:42:01.964471 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.966123 kubelet[2213]: W1213 01:42:01.964502 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.966123 kubelet[2213]: E1213 01:42:01.964533 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.966123 kubelet[2213]: E1213 01:42:01.964930 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.966123 kubelet[2213]: W1213 01:42:01.964952 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.966123 kubelet[2213]: E1213 01:42:01.964980 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.966466 kubelet[2213]: E1213 01:42:01.966243 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.966466 kubelet[2213]: W1213 01:42:01.966288 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.966466 kubelet[2213]: E1213 01:42:01.966320 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.968503 kubelet[2213]: E1213 01:42:01.966746 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.968503 kubelet[2213]: W1213 01:42:01.966772 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.968503 kubelet[2213]: E1213 01:42:01.966804 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.971188 kubelet[2213]: E1213 01:42:01.969254 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.971188 kubelet[2213]: W1213 01:42:01.969297 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.971188 kubelet[2213]: E1213 01:42:01.969329 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.971188 kubelet[2213]: E1213 01:42:01.970795 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.971188 kubelet[2213]: W1213 01:42:01.970816 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.971188 kubelet[2213]: E1213 01:42:01.970846 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.974758 kubelet[2213]: E1213 01:42:01.974726 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.974758 kubelet[2213]: W1213 01:42:01.974755 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.979189 kubelet[2213]: E1213 01:42:01.975887 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.979189 kubelet[2213]: W1213 01:42:01.975910 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.979189 kubelet[2213]: E1213 01:42:01.977459 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.979189 kubelet[2213]: W1213 01:42:01.977481 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.979189 kubelet[2213]: E1213 01:42:01.978961 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.979189 kubelet[2213]: W1213 01:42:01.978982 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.983309 kubelet[2213]: E1213 01:42:01.983230 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.983309 kubelet[2213]: W1213 01:42:01.983292 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.983487 kubelet[2213]: E1213 01:42:01.983333 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.989429 kubelet[2213]: E1213 01:42:01.987490 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.989429 kubelet[2213]: W1213 01:42:01.987523 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.989429 kubelet[2213]: E1213 01:42:01.987562 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.992808 kubelet[2213]: E1213 01:42:01.990164 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.992808 kubelet[2213]: E1213 01:42:01.990208 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.992808 kubelet[2213]: E1213 01:42:01.990231 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.992808 kubelet[2213]: E1213 01:42:01.990251 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:01.992808 kubelet[2213]: E1213 01:42:01.991159 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.992808 kubelet[2213]: W1213 01:42:01.991175 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.995297 kubelet[2213]: E1213 01:42:01.995170 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:01.995461 kubelet[2213]: E1213 01:42:01.995440 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:01.995512 kubelet[2213]: W1213 01:42:01.995462 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:01.995689 kubelet[2213]: E1213 01:42:01.995590 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:02.000121 kubelet[2213]: E1213 01:42:01.996951 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.000121 kubelet[2213]: W1213 01:42:01.996974 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.000121 kubelet[2213]: E1213 01:42:01.998083 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:02.000121 kubelet[2213]: E1213 01:42:01.998192 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.000121 kubelet[2213]: W1213 01:42:01.998205 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.000121 kubelet[2213]: E1213 01:42:01.998375 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:02.000121 kubelet[2213]: E1213 01:42:01.998595 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.000121 kubelet[2213]: W1213 01:42:01.998612 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.000121 kubelet[2213]: E1213 01:42:01.998733 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:02.000121 kubelet[2213]: E1213 01:42:01.999152 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.000547 kubelet[2213]: W1213 01:42:01.999186 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.000547 kubelet[2213]: E1213 01:42:01.999312 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:02.000547 kubelet[2213]: E1213 01:42:01.999734 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.000547 kubelet[2213]: W1213 01:42:01.999748 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.000547 kubelet[2213]: E1213 01:42:01.999863 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:02.000547 kubelet[2213]: E1213 01:42:02.000228 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.000547 kubelet[2213]: W1213 01:42:02.000243 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.000547 kubelet[2213]: E1213 01:42:02.000369 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:02.000822 kubelet[2213]: E1213 01:42:02.000678 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.000822 kubelet[2213]: W1213 01:42:02.000693 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.000822 kubelet[2213]: E1213 01:42:02.000813 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:02.004161 kubelet[2213]: E1213 01:42:02.001178 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.004161 kubelet[2213]: W1213 01:42:02.001200 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.004161 kubelet[2213]: E1213 01:42:02.001329 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:02.004161 kubelet[2213]: E1213 01:42:02.002024 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.004161 kubelet[2213]: W1213 01:42:02.002041 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.004161 kubelet[2213]: E1213 01:42:02.002321 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:02.004161 kubelet[2213]: E1213 01:42:02.002723 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.004161 kubelet[2213]: W1213 01:42:02.002739 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.004161 kubelet[2213]: E1213 01:42:02.002908 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:02.004161 kubelet[2213]: E1213 01:42:02.003391 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.004736 kubelet[2213]: W1213 01:42:02.003409 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.004736 kubelet[2213]: E1213 01:42:02.003737 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:42:02.004736 kubelet[2213]: E1213 01:42:02.003847 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.004736 kubelet[2213]: W1213 01:42:02.003863 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.004736 kubelet[2213]: E1213 01:42:02.003982 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:02.006359 kubelet[2213]: E1213 01:42:02.006257 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:42:02.006359 kubelet[2213]: W1213 01:42:02.006297 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:42:02.006359 kubelet[2213]: E1213 01:42:02.006320 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:42:02.117157 containerd[1611]: time="2024-12-13T01:42:02.116957874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2qpwf,Uid:7ff62acb-5301-4c5f-b552-cdc301cf9f14,Namespace:calico-system,Attempt:0,}" Dec 13 01:42:02.120032 containerd[1611]: time="2024-12-13T01:42:02.119727575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ngg74,Uid:c3e9fb36-d77b-4a50-8a03-38032ba4faba,Namespace:kube-system,Attempt:0,}" Dec 13 01:42:02.770182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737058069.mount: Deactivated successfully. 
Dec 13 01:42:02.779049 containerd[1611]: time="2024-12-13T01:42:02.778969895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:42:02.787839 containerd[1611]: time="2024-12-13T01:42:02.787242137Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Dec 13 01:42:02.788568 containerd[1611]: time="2024-12-13T01:42:02.788489527Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:42:02.789449 containerd[1611]: time="2024-12-13T01:42:02.789404853Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:42:02.789903 containerd[1611]: time="2024-12-13T01:42:02.789812000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:42:02.793580 containerd[1611]: time="2024-12-13T01:42:02.793517575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:42:02.795605 containerd[1611]: time="2024-12-13T01:42:02.795156494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.992561ms" Dec 13 01:42:02.796425 containerd[1611]: 
time="2024-12-13T01:42:02.796372416Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.556665ms" Dec 13 01:42:02.800732 kubelet[2213]: E1213 01:42:02.800658 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:02.963471 containerd[1611]: time="2024-12-13T01:42:02.963340450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:42:02.963471 containerd[1611]: time="2024-12-13T01:42:02.963402386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:42:02.963471 containerd[1611]: time="2024-12-13T01:42:02.963434437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:42:02.965395 containerd[1611]: time="2024-12-13T01:42:02.965213039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:42:02.967832 containerd[1611]: time="2024-12-13T01:42:02.967651636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:42:02.967832 containerd[1611]: time="2024-12-13T01:42:02.967694126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:42:02.967832 containerd[1611]: time="2024-12-13T01:42:02.967707431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:42:02.967832 containerd[1611]: time="2024-12-13T01:42:02.967772383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:42:03.048416 systemd[1]: run-containerd-runc-k8s.io-e50d8b9d2bf968aead137f584f12c011dbfd664d2e6c4051cdef5dc5c064ab87-runc.EWXw35.mount: Deactivated successfully. Dec 13 01:42:03.079226 containerd[1611]: time="2024-12-13T01:42:03.079002886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2qpwf,Uid:7ff62acb-5301-4c5f-b552-cdc301cf9f14,Namespace:calico-system,Attempt:0,} returns sandbox id \"e50d8b9d2bf968aead137f584f12c011dbfd664d2e6c4051cdef5dc5c064ab87\"" Dec 13 01:42:03.083644 containerd[1611]: time="2024-12-13T01:42:03.083428215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:42:03.090354 containerd[1611]: time="2024-12-13T01:42:03.090322138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ngg74,Uid:c3e9fb36-d77b-4a50-8a03-38032ba4faba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d58e30c532bc85b766f91c9320741c3b2590dd0f1e466a9e950e172373268bb1\"" Dec 13 01:42:03.801742 kubelet[2213]: E1213 01:42:03.801664 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:03.944365 kubelet[2213]: E1213 01:42:03.943819 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdf67" podUID="0ba65a02-8118-4915-9dfa-fbbe2bbe53d0" Dec 13 01:42:04.589636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3380229514.mount: Deactivated successfully. 
Dec 13 01:42:04.700186 containerd[1611]: time="2024-12-13T01:42:04.700119917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:04.701526 containerd[1611]: time="2024-12-13T01:42:04.701436278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 01:42:04.702634 containerd[1611]: time="2024-12-13T01:42:04.702583480Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:04.706554 containerd[1611]: time="2024-12-13T01:42:04.705884680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:04.706554 containerd[1611]: time="2024-12-13T01:42:04.706387639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.622919969s" Dec 13 01:42:04.706554 containerd[1611]: time="2024-12-13T01:42:04.706415260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:42:04.707916 containerd[1611]: time="2024-12-13T01:42:04.707898686Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:42:04.708897 containerd[1611]: time="2024-12-13T01:42:04.708841592Z" level=info msg="CreateContainer within sandbox 
\"e50d8b9d2bf968aead137f584f12c011dbfd664d2e6c4051cdef5dc5c064ab87\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:42:04.726874 containerd[1611]: time="2024-12-13T01:42:04.726796858Z" level=info msg="CreateContainer within sandbox \"e50d8b9d2bf968aead137f584f12c011dbfd664d2e6c4051cdef5dc5c064ab87\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a5cc7c31af1399b2aa38680849fea80676541c5cd853687028abbead95d5cbaf\"" Dec 13 01:42:04.727958 containerd[1611]: time="2024-12-13T01:42:04.727913291Z" level=info msg="StartContainer for \"a5cc7c31af1399b2aa38680849fea80676541c5cd853687028abbead95d5cbaf\"" Dec 13 01:42:04.802223 containerd[1611]: time="2024-12-13T01:42:04.801541175Z" level=info msg="StartContainer for \"a5cc7c31af1399b2aa38680849fea80676541c5cd853687028abbead95d5cbaf\" returns successfully" Dec 13 01:42:04.802508 kubelet[2213]: E1213 01:42:04.802489 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:04.864730 containerd[1611]: time="2024-12-13T01:42:04.864552955Z" level=info msg="shim disconnected" id=a5cc7c31af1399b2aa38680849fea80676541c5cd853687028abbead95d5cbaf namespace=k8s.io Dec 13 01:42:04.864730 containerd[1611]: time="2024-12-13T01:42:04.864637474Z" level=warning msg="cleaning up after shim disconnected" id=a5cc7c31af1399b2aa38680849fea80676541c5cd853687028abbead95d5cbaf namespace=k8s.io Dec 13 01:42:04.864730 containerd[1611]: time="2024-12-13T01:42:04.864652012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:42:04.882385 containerd[1611]: time="2024-12-13T01:42:04.882300678Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:42:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:42:05.541431 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-a5cc7c31af1399b2aa38680849fea80676541c5cd853687028abbead95d5cbaf-rootfs.mount: Deactivated successfully. Dec 13 01:42:05.803207 kubelet[2213]: E1213 01:42:05.803034 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:05.833454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170574921.mount: Deactivated successfully. Dec 13 01:42:05.945922 kubelet[2213]: E1213 01:42:05.945882 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdf67" podUID="0ba65a02-8118-4915-9dfa-fbbe2bbe53d0" Dec 13 01:42:06.201463 containerd[1611]: time="2024-12-13T01:42:06.201037986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:06.202159 containerd[1611]: time="2024-12-13T01:42:06.202107521Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619984" Dec 13 01:42:06.203072 containerd[1611]: time="2024-12-13T01:42:06.203018017Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:06.205010 containerd[1611]: time="2024-12-13T01:42:06.204959815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:06.206984 containerd[1611]: time="2024-12-13T01:42:06.206880624Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id 
\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.498861942s" Dec 13 01:42:06.206984 containerd[1611]: time="2024-12-13T01:42:06.206919418Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:42:06.210340 containerd[1611]: time="2024-12-13T01:42:06.210141689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:42:06.211237 containerd[1611]: time="2024-12-13T01:42:06.211133266Z" level=info msg="CreateContainer within sandbox \"d58e30c532bc85b766f91c9320741c3b2590dd0f1e466a9e950e172373268bb1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:42:06.225579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308856559.mount: Deactivated successfully. 
Dec 13 01:42:06.231412 containerd[1611]: time="2024-12-13T01:42:06.231344744Z" level=info msg="CreateContainer within sandbox \"d58e30c532bc85b766f91c9320741c3b2590dd0f1e466a9e950e172373268bb1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1811cc23eaef87d3df8221e35087a744ed7c6e58c07d29eacd58b8055c47a065\"" Dec 13 01:42:06.232023 containerd[1611]: time="2024-12-13T01:42:06.231889050Z" level=info msg="StartContainer for \"1811cc23eaef87d3df8221e35087a744ed7c6e58c07d29eacd58b8055c47a065\"" Dec 13 01:42:06.293470 containerd[1611]: time="2024-12-13T01:42:06.293355464Z" level=info msg="StartContainer for \"1811cc23eaef87d3df8221e35087a744ed7c6e58c07d29eacd58b8055c47a065\" returns successfully" Dec 13 01:42:06.803775 kubelet[2213]: E1213 01:42:06.803707 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:07.021059 kubelet[2213]: I1213 01:42:07.020962 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ngg74" podStartSLOduration=4.90331549 podStartE2EDuration="8.020857398s" podCreationTimestamp="2024-12-13 01:41:59 +0000 UTC" firstStartedPulling="2024-12-13 01:42:03.091721255 +0000 UTC m=+3.643069754" lastFinishedPulling="2024-12-13 01:42:06.209263162 +0000 UTC m=+6.760611662" observedRunningTime="2024-12-13 01:42:07.020493121 +0000 UTC m=+7.571841661" watchObservedRunningTime="2024-12-13 01:42:07.020857398 +0000 UTC m=+7.572205927" Dec 13 01:42:07.804390 kubelet[2213]: E1213 01:42:07.804200 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:07.944496 kubelet[2213]: E1213 01:42:07.944032 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-sdf67" podUID="0ba65a02-8118-4915-9dfa-fbbe2bbe53d0" Dec 13 01:42:08.804648 kubelet[2213]: E1213 01:42:08.804606 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:09.804772 kubelet[2213]: E1213 01:42:09.804725 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:09.946117 kubelet[2213]: E1213 01:42:09.945241 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sdf67" podUID="0ba65a02-8118-4915-9dfa-fbbe2bbe53d0" Dec 13 01:42:10.649754 containerd[1611]: time="2024-12-13T01:42:10.649704287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:10.650832 containerd[1611]: time="2024-12-13T01:42:10.650670858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:42:10.651572 containerd[1611]: time="2024-12-13T01:42:10.651529705Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:10.653569 containerd[1611]: time="2024-12-13T01:42:10.653523601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:10.654112 containerd[1611]: time="2024-12-13T01:42:10.654059310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.44389012s" Dec 13 01:42:10.654112 containerd[1611]: time="2024-12-13T01:42:10.654084397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:42:10.655937 containerd[1611]: time="2024-12-13T01:42:10.655899124Z" level=info msg="CreateContainer within sandbox \"e50d8b9d2bf968aead137f584f12c011dbfd664d2e6c4051cdef5dc5c064ab87\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:42:10.672499 containerd[1611]: time="2024-12-13T01:42:10.672450685Z" level=info msg="CreateContainer within sandbox \"e50d8b9d2bf968aead137f584f12c011dbfd664d2e6c4051cdef5dc5c064ab87\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0a6ed1210a73db8bc20b864865cd7d30969f6bdd7a27ce304a50621916442cae\"" Dec 13 01:42:10.674169 containerd[1611]: time="2024-12-13T01:42:10.672871909Z" level=info msg="StartContainer for \"0a6ed1210a73db8bc20b864865cd7d30969f6bdd7a27ce304a50621916442cae\"" Dec 13 01:42:10.701639 systemd[1]: run-containerd-runc-k8s.io-0a6ed1210a73db8bc20b864865cd7d30969f6bdd7a27ce304a50621916442cae-runc.F9z3br.mount: Deactivated successfully. 
Dec 13 01:42:10.732282 containerd[1611]: time="2024-12-13T01:42:10.732212074Z" level=info msg="StartContainer for \"0a6ed1210a73db8bc20b864865cd7d30969f6bdd7a27ce304a50621916442cae\" returns successfully" Dec 13 01:42:10.808229 kubelet[2213]: E1213 01:42:10.808162 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:11.325040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a6ed1210a73db8bc20b864865cd7d30969f6bdd7a27ce304a50621916442cae-rootfs.mount: Deactivated successfully. Dec 13 01:42:11.335259 containerd[1611]: time="2024-12-13T01:42:11.335164595Z" level=info msg="shim disconnected" id=0a6ed1210a73db8bc20b864865cd7d30969f6bdd7a27ce304a50621916442cae namespace=k8s.io Dec 13 01:42:11.335455 containerd[1611]: time="2024-12-13T01:42:11.335257359Z" level=warning msg="cleaning up after shim disconnected" id=0a6ed1210a73db8bc20b864865cd7d30969f6bdd7a27ce304a50621916442cae namespace=k8s.io Dec 13 01:42:11.335455 containerd[1611]: time="2024-12-13T01:42:11.335270143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:42:11.350158 kubelet[2213]: I1213 01:42:11.350122 2213 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:42:11.809703 kubelet[2213]: E1213 01:42:11.809605 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:11.949406 containerd[1611]: time="2024-12-13T01:42:11.949353412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdf67,Uid:0ba65a02-8118-4915-9dfa-fbbe2bbe53d0,Namespace:calico-system,Attempt:0,}" Dec 13 01:42:12.046949 containerd[1611]: time="2024-12-13T01:42:12.046304694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:42:12.079803 containerd[1611]: time="2024-12-13T01:42:12.079519707Z" level=error msg="Failed to destroy network for sandbox 
\"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:42:12.081730 containerd[1611]: time="2024-12-13T01:42:12.081596408Z" level=error msg="encountered an error cleaning up failed sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:42:12.081730 containerd[1611]: time="2024-12-13T01:42:12.081707807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdf67,Uid:0ba65a02-8118-4915-9dfa-fbbe2bbe53d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:42:12.083855 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623-shm.mount: Deactivated successfully. 
Dec 13 01:42:12.084314 kubelet[2213]: E1213 01:42:12.084268 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:42:12.084471 kubelet[2213]: E1213 01:42:12.084403 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sdf67"
Dec 13 01:42:12.084502 kubelet[2213]: E1213 01:42:12.084485 2213 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sdf67"
Dec 13 01:42:12.084702 kubelet[2213]: E1213 01:42:12.084598 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sdf67_calico-system(0ba65a02-8118-4915-9dfa-fbbe2bbe53d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sdf67_calico-system(0ba65a02-8118-4915-9dfa-fbbe2bbe53d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sdf67" podUID="0ba65a02-8118-4915-9dfa-fbbe2bbe53d0"
Dec 13 01:42:12.810844 kubelet[2213]: E1213 01:42:12.810710 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:13.046904 kubelet[2213]: I1213 01:42:13.046825 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623"
Dec 13 01:42:13.048054 containerd[1611]: time="2024-12-13T01:42:13.047993835Z" level=info msg="StopPodSandbox for \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\""
Dec 13 01:42:13.048491 containerd[1611]: time="2024-12-13T01:42:13.048396434Z" level=info msg="Ensure that sandbox 587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623 in task-service has been cleanup successfully"
Dec 13 01:42:13.095375 containerd[1611]: time="2024-12-13T01:42:13.094731916Z" level=error msg="StopPodSandbox for \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\" failed" error="failed to destroy network for sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:42:13.095540 kubelet[2213]: E1213 01:42:13.095001 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623"
Dec 13 01:42:13.095540 kubelet[2213]: E1213 01:42:13.095121 2213 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623"}
Dec 13 01:42:13.095540 kubelet[2213]: E1213 01:42:13.095172 2213 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:42:13.095540 kubelet[2213]: E1213 01:42:13.095211 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sdf67" podUID="0ba65a02-8118-4915-9dfa-fbbe2bbe53d0"
Dec 13 01:42:13.812302 kubelet[2213]: E1213 01:42:13.812205 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:14.813739 kubelet[2213]: E1213 01:42:14.813634 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:15.814444 kubelet[2213]: E1213 01:42:15.814369 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:16.815338 kubelet[2213]: E1213 01:42:16.815275 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:17.816965 kubelet[2213]: E1213 01:42:17.816905 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:18.818028 kubelet[2213]: E1213 01:42:18.817962 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:19.013299 kubelet[2213]: I1213 01:42:19.013252 2213 topology_manager.go:215] "Topology Admit Handler" podUID="97990401-e34a-45ad-b932-28c3a2fd54e7" podNamespace="default" podName="nginx-deployment-6d5f899847-nzs5p"
Dec 13 01:42:19.052377 kubelet[2213]: I1213 01:42:19.052306 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gld65\" (UniqueName: \"kubernetes.io/projected/97990401-e34a-45ad-b932-28c3a2fd54e7-kube-api-access-gld65\") pod \"nginx-deployment-6d5f899847-nzs5p\" (UID: \"97990401-e34a-45ad-b932-28c3a2fd54e7\") " pod="default/nginx-deployment-6d5f899847-nzs5p"
Dec 13 01:42:19.326210 containerd[1611]: time="2024-12-13T01:42:19.326148928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nzs5p,Uid:97990401-e34a-45ad-b932-28c3a2fd54e7,Namespace:default,Attempt:0,}"
Dec 13 01:42:19.445636 containerd[1611]: time="2024-12-13T01:42:19.445572289Z" level=error msg="Failed to destroy network for sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:42:19.448618 containerd[1611]: time="2024-12-13T01:42:19.448494420Z" level=error msg="encountered an error cleaning up failed sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:42:19.448618 containerd[1611]: time="2024-12-13T01:42:19.448548471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nzs5p,Uid:97990401-e34a-45ad-b932-28c3a2fd54e7,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:42:19.449515 kubelet[2213]: E1213 01:42:19.448842 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:42:19.449515 kubelet[2213]: E1213 01:42:19.448912 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-nzs5p"
Dec 13 01:42:19.449515 kubelet[2213]: E1213 01:42:19.448934 2213 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-nzs5p"
Dec 13 01:42:19.449445 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e-shm.mount: Deactivated successfully.
Dec 13 01:42:19.450006 kubelet[2213]: E1213 01:42:19.449000 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-nzs5p_default(97990401-e34a-45ad-b932-28c3a2fd54e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-nzs5p_default(97990401-e34a-45ad-b932-28c3a2fd54e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-nzs5p" podUID="97990401-e34a-45ad-b932-28c3a2fd54e7"
Dec 13 01:42:19.800507 kubelet[2213]: E1213 01:42:19.800445 2213 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:19.818480 kubelet[2213]: E1213 01:42:19.818406 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:20.062678 kubelet[2213]: I1213 01:42:20.062068 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e"
Dec 13 01:42:20.062792 containerd[1611]: time="2024-12-13T01:42:20.062713331Z" level=info msg="StopPodSandbox for \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\""
Dec 13 01:42:20.062941 containerd[1611]: time="2024-12-13T01:42:20.062868543Z" level=info msg="Ensure that sandbox 9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e in task-service has been cleanup successfully"
Dec 13 01:42:20.117902 containerd[1611]: time="2024-12-13T01:42:20.117859387Z" level=error msg="StopPodSandbox for \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\" failed" error="failed to destroy network for sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:42:20.118375 kubelet[2213]: E1213 01:42:20.118271 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e"
Dec 13 01:42:20.118375 kubelet[2213]: E1213 01:42:20.118326 2213 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e"}
Dec 13 01:42:20.118510 kubelet[2213]: E1213 01:42:20.118393 2213 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97990401-e34a-45ad-b932-28c3a2fd54e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Dec 13 01:42:20.118510 kubelet[2213]: E1213 01:42:20.118433 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97990401-e34a-45ad-b932-28c3a2fd54e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-nzs5p" podUID="97990401-e34a-45ad-b932-28c3a2fd54e7"
Dec 13 01:42:20.210791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121752631.mount: Deactivated successfully.
Dec 13 01:42:20.275416 containerd[1611]: time="2024-12-13T01:42:20.275336998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:42:20.276401 containerd[1611]: time="2024-12-13T01:42:20.276351126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Dec 13 01:42:20.277802 containerd[1611]: time="2024-12-13T01:42:20.277765949Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:42:20.281508 containerd[1611]: time="2024-12-13T01:42:20.281419865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:42:20.283008 containerd[1611]: time="2024-12-13T01:42:20.282042377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.235652141s"
Dec 13 01:42:20.283008 containerd[1611]: time="2024-12-13T01:42:20.282071030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Dec 13 01:42:20.299257 containerd[1611]: time="2024-12-13T01:42:20.299197189Z" level=info msg="CreateContainer within sandbox \"e50d8b9d2bf968aead137f584f12c011dbfd664d2e6c4051cdef5dc5c064ab87\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Dec 13 01:42:20.328680 containerd[1611]: time="2024-12-13T01:42:20.327967812Z" level=info msg="CreateContainer within sandbox \"e50d8b9d2bf968aead137f584f12c011dbfd664d2e6c4051cdef5dc5c064ab87\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d62024ff009d128b544b789908ecf28379021e9d2f3e44303f12d46626722b25\""
Dec 13 01:42:20.329830 containerd[1611]: time="2024-12-13T01:42:20.329458246Z" level=info msg="StartContainer for \"d62024ff009d128b544b789908ecf28379021e9d2f3e44303f12d46626722b25\""
Dec 13 01:42:20.444413 containerd[1611]: time="2024-12-13T01:42:20.444303849Z" level=info msg="StartContainer for \"d62024ff009d128b544b789908ecf28379021e9d2f3e44303f12d46626722b25\" returns successfully"
Dec 13 01:42:20.555086 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Dec 13 01:42:20.555250 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Dec 13 01:42:20.819456 kubelet[2213]: E1213 01:42:20.819339 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:21.083086 kubelet[2213]: I1213 01:42:21.082885 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-2qpwf" podStartSLOduration=4.883086807 podStartE2EDuration="22.082819596s" podCreationTimestamp="2024-12-13 01:41:59 +0000 UTC" firstStartedPulling="2024-12-13 01:42:03.082534642 +0000 UTC m=+3.633883140" lastFinishedPulling="2024-12-13 01:42:20.28226743 +0000 UTC m=+20.833615929" observedRunningTime="2024-12-13 01:42:21.082644727 +0000 UTC m=+21.633993256" watchObservedRunningTime="2024-12-13 01:42:21.082819596 +0000 UTC m=+21.634168115"
Dec 13 01:42:21.820329 kubelet[2213]: E1213 01:42:21.820260 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:22.223335 kernel: bpftool[3034]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Dec 13 01:42:22.484058 systemd-networkd[1251]: vxlan.calico: Link UP
Dec 13 01:42:22.484069 systemd-networkd[1251]: vxlan.calico: Gained carrier
Dec 13 01:42:22.821422 kubelet[2213]: E1213 01:42:22.820413 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:23.821511 kubelet[2213]: E1213 01:42:23.821409 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:24.188468 systemd-networkd[1251]: vxlan.calico: Gained IPv6LL
Dec 13 01:42:24.822146 kubelet[2213]: E1213 01:42:24.822056 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:25.823302 kubelet[2213]: E1213 01:42:25.823188 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:26.823563 kubelet[2213]: E1213 01:42:26.823456 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:27.824672 kubelet[2213]: E1213 01:42:27.824595 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:27.946353 containerd[1611]: time="2024-12-13T01:42:27.946184917Z" level=info msg="StopPodSandbox for \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\""
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.027 [INFO][3127] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623"
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.027 [INFO][3127] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" iface="eth0" netns="/var/run/netns/cni-cc60e526-63b6-9c5d-3656-e1c44b794a45"
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.028 [INFO][3127] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" iface="eth0" netns="/var/run/netns/cni-cc60e526-63b6-9c5d-3656-e1c44b794a45"
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.029 [INFO][3127] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" iface="eth0" netns="/var/run/netns/cni-cc60e526-63b6-9c5d-3656-e1c44b794a45"
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.029 [INFO][3127] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623"
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.029 [INFO][3127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623"
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.075 [INFO][3133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" HandleID="k8s-pod-network.587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0"
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.075 [INFO][3133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.075 [INFO][3133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.083 [WARNING][3133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" HandleID="k8s-pod-network.587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0"
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.084 [INFO][3133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" HandleID="k8s-pod-network.587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0"
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.085 [INFO][3133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:42:28.093542 containerd[1611]: 2024-12-13 01:42:28.089 [INFO][3127] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623"
Dec 13 01:42:28.096193 containerd[1611]: time="2024-12-13T01:42:28.096152601Z" level=info msg="TearDown network for sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\" successfully"
Dec 13 01:42:28.096193 containerd[1611]: time="2024-12-13T01:42:28.096190231Z" level=info msg="StopPodSandbox for \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\" returns successfully"
Dec 13 01:42:28.096895 systemd[1]: run-netns-cni\x2dcc60e526\x2d63b6\x2d9c5d\x2d3656\x2de1c44b794a45.mount: Deactivated successfully.
Dec 13 01:42:28.097270 containerd[1611]: time="2024-12-13T01:42:28.096992851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdf67,Uid:0ba65a02-8118-4915-9dfa-fbbe2bbe53d0,Namespace:calico-system,Attempt:1,}"
Dec 13 01:42:28.243857 systemd-networkd[1251]: calibc9f782e33f: Link UP
Dec 13 01:42:28.246428 systemd-networkd[1251]: calibc9f782e33f: Gained carrier
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.159 [INFO][3139] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--sdf67-eth0 csi-node-driver- calico-system 0ba65a02-8118-4915-9dfa-fbbe2bbe53d0 1641 0 2024-12-13 01:41:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-sdf67 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibc9f782e33f [] []}} ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Namespace="calico-system" Pod="csi-node-driver-sdf67" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--sdf67-"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.159 [INFO][3139] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Namespace="calico-system" Pod="csi-node-driver-sdf67" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--sdf67-eth0"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.194 [INFO][3150] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" HandleID="k8s-pod-network.24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.206 [INFO][3150] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" HandleID="k8s-pod-network.24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290ab0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-sdf67", "timestamp":"2024-12-13 01:42:28.19486609 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.206 [INFO][3150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.206 [INFO][3150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.206 [INFO][3150] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4'
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.209 [INFO][3150] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" host="10.0.0.4"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.215 [INFO][3150] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.220 [INFO][3150] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.222 [INFO][3150] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.224 [INFO][3150] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.224 [INFO][3150] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" host="10.0.0.4"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.225 [INFO][3150] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.229 [INFO][3150] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" host="10.0.0.4"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.234 [INFO][3150] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" host="10.0.0.4"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.234 [INFO][3150] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" host="10.0.0.4"
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.234 [INFO][3150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 01:42:28.281125 containerd[1611]: 2024-12-13 01:42:28.234 [INFO][3150] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" HandleID="k8s-pod-network.24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0"
Dec 13 01:42:28.281988 containerd[1611]: 2024-12-13 01:42:28.239 [INFO][3139] cni-plugin/k8s.go 386: Populated endpoint ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Namespace="calico-system" Pod="csi-node-driver-sdf67" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--sdf67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--sdf67-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0", ResourceVersion:"1641", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 41, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-sdf67", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc9f782e33f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:42:28.281988 containerd[1611]: 2024-12-13 01:42:28.240 [INFO][3139] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Namespace="calico-system" Pod="csi-node-driver-sdf67" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--sdf67-eth0"
Dec 13 01:42:28.281988 containerd[1611]: 2024-12-13 01:42:28.240 [INFO][3139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc9f782e33f ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Namespace="calico-system" Pod="csi-node-driver-sdf67" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--sdf67-eth0"
Dec 13 01:42:28.281988 containerd[1611]: 2024-12-13 01:42:28.247 [INFO][3139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Namespace="calico-system" Pod="csi-node-driver-sdf67" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--sdf67-eth0"
Dec 13 01:42:28.281988 containerd[1611]: 2024-12-13 01:42:28.253 [INFO][3139] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Namespace="calico-system" Pod="csi-node-driver-sdf67" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--sdf67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--sdf67-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0", ResourceVersion:"1641", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 41, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0", Pod:"csi-node-driver-sdf67", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc9f782e33f", MAC:"fe:0d:81:26:23:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 01:42:28.281988 containerd[1611]: 2024-12-13 01:42:28.264 [INFO][3139] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0" Namespace="calico-system" Pod="csi-node-driver-sdf67" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--sdf67-eth0"
Dec 13 01:42:28.302809 containerd[1611]: time="2024-12-13T01:42:28.302639292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:42:28.302809 containerd[1611]: time="2024-12-13T01:42:28.302718380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:42:28.302809 containerd[1611]: time="2024-12-13T01:42:28.302752103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:42:28.303038 containerd[1611]: time="2024-12-13T01:42:28.302895283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:42:28.345907 containerd[1611]: time="2024-12-13T01:42:28.345672147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sdf67,Uid:0ba65a02-8118-4915-9dfa-fbbe2bbe53d0,Namespace:calico-system,Attempt:1,} returns sandbox id \"24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0\""
Dec 13 01:42:28.348031 containerd[1611]: time="2024-12-13T01:42:28.347921718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Dec 13 01:42:28.825899 kubelet[2213]: E1213 01:42:28.825762 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:29.692596 systemd-networkd[1251]: calibc9f782e33f: Gained IPv6LL
Dec 13 01:42:29.826646 kubelet[2213]: E1213 01:42:29.826436 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 01:42:29.974297 containerd[1611]: time="2024-12-13T01:42:29.974237584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:42:29.975683 containerd[1611]: time="2024-12-13T01:42:29.975586931Z"
level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 01:42:29.977265 containerd[1611]: time="2024-12-13T01:42:29.977196459Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:29.979693 containerd[1611]: time="2024-12-13T01:42:29.979659812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:29.981019 containerd[1611]: time="2024-12-13T01:42:29.980274699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.632270196s" Dec 13 01:42:29.981019 containerd[1611]: time="2024-12-13T01:42:29.980316087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:42:29.982049 containerd[1611]: time="2024-12-13T01:42:29.982010795Z" level=info msg="CreateContainer within sandbox \"24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:42:29.998052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035612256.mount: Deactivated successfully. 
Dec 13 01:42:30.002490 containerd[1611]: time="2024-12-13T01:42:30.002397791Z" level=info msg="CreateContainer within sandbox \"24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b2e612a09cf96578ea68a2b58a8197ffd04b288dba5ccccdc002d20cdfe76413\"" Dec 13 01:42:30.003618 containerd[1611]: time="2024-12-13T01:42:30.003552422Z" level=info msg="StartContainer for \"b2e612a09cf96578ea68a2b58a8197ffd04b288dba5ccccdc002d20cdfe76413\"" Dec 13 01:42:30.084537 containerd[1611]: time="2024-12-13T01:42:30.084439275Z" level=info msg="StartContainer for \"b2e612a09cf96578ea68a2b58a8197ffd04b288dba5ccccdc002d20cdfe76413\" returns successfully" Dec 13 01:42:30.086255 containerd[1611]: time="2024-12-13T01:42:30.086136688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:42:30.827638 kubelet[2213]: E1213 01:42:30.827541 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:31.827813 kubelet[2213]: E1213 01:42:31.827751 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:31.846255 containerd[1611]: time="2024-12-13T01:42:31.846185380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:31.847840 containerd[1611]: time="2024-12-13T01:42:31.847765471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 01:42:31.849456 containerd[1611]: time="2024-12-13T01:42:31.849378995Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:31.851828 containerd[1611]: 
time="2024-12-13T01:42:31.851795110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:31.852471 containerd[1611]: time="2024-12-13T01:42:31.852319005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.766116923s" Dec 13 01:42:31.852471 containerd[1611]: time="2024-12-13T01:42:31.852345295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:42:31.854324 containerd[1611]: time="2024-12-13T01:42:31.854289331Z" level=info msg="CreateContainer within sandbox \"24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:42:31.873585 containerd[1611]: time="2024-12-13T01:42:31.873538493Z" level=info msg="CreateContainer within sandbox \"24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a26ce40b7d77f815e866bd6532cb9ba5b1c70e1b7d3e6867367488c30c5306c8\"" Dec 13 01:42:31.874082 containerd[1611]: time="2024-12-13T01:42:31.874053862Z" level=info msg="StartContainer for \"a26ce40b7d77f815e866bd6532cb9ba5b1c70e1b7d3e6867367488c30c5306c8\"" Dec 13 01:42:31.942756 containerd[1611]: time="2024-12-13T01:42:31.942623242Z" level=info msg="StartContainer for 
\"a26ce40b7d77f815e866bd6532cb9ba5b1c70e1b7d3e6867367488c30c5306c8\" returns successfully" Dec 13 01:42:32.115027 kubelet[2213]: I1213 01:42:32.114842 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-sdf67" podStartSLOduration=29.609500976 podStartE2EDuration="33.114775644s" podCreationTimestamp="2024-12-13 01:41:59 +0000 UTC" firstStartedPulling="2024-12-13 01:42:28.34730645 +0000 UTC m=+28.898654949" lastFinishedPulling="2024-12-13 01:42:31.852581098 +0000 UTC m=+32.403929617" observedRunningTime="2024-12-13 01:42:32.11323101 +0000 UTC m=+32.664579539" watchObservedRunningTime="2024-12-13 01:42:32.114775644 +0000 UTC m=+32.666124164" Dec 13 01:42:32.829278 kubelet[2213]: E1213 01:42:32.829206 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:32.904301 kubelet[2213]: I1213 01:42:32.904244 2213 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:42:32.906724 kubelet[2213]: I1213 01:42:32.906655 2213 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:42:32.945657 containerd[1611]: time="2024-12-13T01:42:32.945576315Z" level=info msg="StopPodSandbox for \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\"" Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.023 [INFO][3308] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.023 [INFO][3308] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" iface="eth0" netns="/var/run/netns/cni-e1a7be19-02fb-1f59-5fcc-a59373b90369" Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.024 [INFO][3308] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" iface="eth0" netns="/var/run/netns/cni-e1a7be19-02fb-1f59-5fcc-a59373b90369" Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.024 [INFO][3308] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" iface="eth0" netns="/var/run/netns/cni-e1a7be19-02fb-1f59-5fcc-a59373b90369" Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.024 [INFO][3308] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.024 [INFO][3308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.068 [INFO][3315] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" HandleID="k8s-pod-network.9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.068 [INFO][3315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.068 [INFO][3315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.074 [WARNING][3315] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" HandleID="k8s-pod-network.9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.074 [INFO][3315] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" HandleID="k8s-pod-network.9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.076 [INFO][3315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:42:33.084497 containerd[1611]: 2024-12-13 01:42:33.081 [INFO][3308] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:42:33.090372 containerd[1611]: time="2024-12-13T01:42:33.087241670Z" level=info msg="TearDown network for sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\" successfully" Dec 13 01:42:33.090372 containerd[1611]: time="2024-12-13T01:42:33.087282807Z" level=info msg="StopPodSandbox for \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\" returns successfully" Dec 13 01:42:33.088261 systemd[1]: run-netns-cni\x2de1a7be19\x2d02fb\x2d1f59\x2d5fcc\x2da59373b90369.mount: Deactivated successfully. 
Dec 13 01:42:33.090914 containerd[1611]: time="2024-12-13T01:42:33.090450424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nzs5p,Uid:97990401-e34a-45ad-b932-28c3a2fd54e7,Namespace:default,Attempt:1,}" Dec 13 01:42:33.249399 systemd-networkd[1251]: cali5a17b6b0a7d: Link UP Dec 13 01:42:33.254058 systemd-networkd[1251]: cali5a17b6b0a7d: Gained carrier Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.153 [INFO][3322] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0 nginx-deployment-6d5f899847- default 97990401-e34a-45ad-b932-28c3a2fd54e7 1668 0 2024-12-13 01:42:19 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-6d5f899847-nzs5p eth0 default [] [] [kns.default ksa.default.default] cali5a17b6b0a7d [] []}} ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Namespace="default" Pod="nginx-deployment-6d5f899847-nzs5p" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.153 [INFO][3322] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Namespace="default" Pod="nginx-deployment-6d5f899847-nzs5p" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.194 [INFO][3333] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" HandleID="k8s-pod-network.d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 
01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.205 [INFO][3333] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" HandleID="k8s-pod-network.d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004cecf0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-6d5f899847-nzs5p", "timestamp":"2024-12-13 01:42:33.194523399 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.205 [INFO][3333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.205 [INFO][3333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.205 [INFO][3333] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.207 [INFO][3333] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" host="10.0.0.4" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.211 [INFO][3333] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.215 [INFO][3333] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.217 [INFO][3333] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.221 [INFO][3333] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.221 [INFO][3333] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" host="10.0.0.4" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.224 [INFO][3333] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.228 [INFO][3333] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" host="10.0.0.4" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.235 [INFO][3333] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 
handle="k8s-pod-network.d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" host="10.0.0.4" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.235 [INFO][3333] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" host="10.0.0.4" Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.235 [INFO][3333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:42:33.276999 containerd[1611]: 2024-12-13 01:42:33.235 [INFO][3333] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" HandleID="k8s-pod-network.d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:42:33.277869 containerd[1611]: 2024-12-13 01:42:33.240 [INFO][3322] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Namespace="default" Pod="nginx-deployment-6d5f899847-nzs5p" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"97990401-e34a-45ad-b932-28c3a2fd54e7", ResourceVersion:"1668", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-6d5f899847-nzs5p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5a17b6b0a7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:42:33.277869 containerd[1611]: 2024-12-13 01:42:33.241 [INFO][3322] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Namespace="default" Pod="nginx-deployment-6d5f899847-nzs5p" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:42:33.277869 containerd[1611]: 2024-12-13 01:42:33.241 [INFO][3322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a17b6b0a7d ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Namespace="default" Pod="nginx-deployment-6d5f899847-nzs5p" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:42:33.277869 containerd[1611]: 2024-12-13 01:42:33.255 [INFO][3322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Namespace="default" Pod="nginx-deployment-6d5f899847-nzs5p" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:42:33.277869 containerd[1611]: 2024-12-13 01:42:33.257 [INFO][3322] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Namespace="default" Pod="nginx-deployment-6d5f899847-nzs5p" 
WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"97990401-e34a-45ad-b932-28c3a2fd54e7", ResourceVersion:"1668", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e", Pod:"nginx-deployment-6d5f899847-nzs5p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5a17b6b0a7d", MAC:"9a:65:01:d9:b7:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:42:33.277869 containerd[1611]: 2024-12-13 01:42:33.269 [INFO][3322] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e" Namespace="default" Pod="nginx-deployment-6d5f899847-nzs5p" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:42:33.312373 containerd[1611]: time="2024-12-13T01:42:33.312046959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:42:33.312373 containerd[1611]: time="2024-12-13T01:42:33.312321035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:42:33.312640 containerd[1611]: time="2024-12-13T01:42:33.312356802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:42:33.312640 containerd[1611]: time="2024-12-13T01:42:33.312520481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:42:33.382799 containerd[1611]: time="2024-12-13T01:42:33.382749671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nzs5p,Uid:97990401-e34a-45ad-b932-28c3a2fd54e7,Namespace:default,Attempt:1,} returns sandbox id \"d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e\"" Dec 13 01:42:33.384842 containerd[1611]: time="2024-12-13T01:42:33.384623524Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:42:33.830372 kubelet[2213]: E1213 01:42:33.830284 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:34.684394 systemd-networkd[1251]: cali5a17b6b0a7d: Gained IPv6LL Dec 13 01:42:34.831251 kubelet[2213]: E1213 01:42:34.831166 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:35.832445 kubelet[2213]: E1213 01:42:35.832325 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:36.453227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938805037.mount: Deactivated successfully. 
Dec 13 01:42:36.833012 kubelet[2213]: E1213 01:42:36.832852 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:37.568146 containerd[1611]: time="2024-12-13T01:42:37.568053892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:37.569844 containerd[1611]: time="2024-12-13T01:42:37.569787071Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 01:42:37.571228 containerd[1611]: time="2024-12-13T01:42:37.571170392Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:37.575831 containerd[1611]: time="2024-12-13T01:42:37.575760062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:37.577149 containerd[1611]: time="2024-12-13T01:42:37.577072499Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 4.192416043s" Dec 13 01:42:37.577149 containerd[1611]: time="2024-12-13T01:42:37.577133493Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:42:37.647851 containerd[1611]: time="2024-12-13T01:42:37.647810645Z" level=info msg="CreateContainer within sandbox \"d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:42:37.672131 containerd[1611]: time="2024-12-13T01:42:37.664064577Z" level=info msg="CreateContainer within sandbox \"d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f3f28e655293f62d806322ba609262efbbcba9bb5ddc0565238f80a5c9cc7950\"" Dec 13 01:42:37.672131 containerd[1611]: time="2024-12-13T01:42:37.667493905Z" level=info msg="StartContainer for \"f3f28e655293f62d806322ba609262efbbcba9bb5ddc0565238f80a5c9cc7950\"" Dec 13 01:42:37.744899 containerd[1611]: time="2024-12-13T01:42:37.744784842Z" level=info msg="StartContainer for \"f3f28e655293f62d806322ba609262efbbcba9bb5ddc0565238f80a5c9cc7950\" returns successfully" Dec 13 01:42:37.834465 kubelet[2213]: E1213 01:42:37.834205 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:38.138487 kubelet[2213]: I1213 01:42:38.138233 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-nzs5p" podStartSLOduration=14.944394108000001 podStartE2EDuration="19.138169558s" podCreationTimestamp="2024-12-13 01:42:19 +0000 UTC" firstStartedPulling="2024-12-13 01:42:33.384079801 +0000 UTC m=+33.935428310" lastFinishedPulling="2024-12-13 01:42:37.577855222 +0000 UTC m=+38.129203760" observedRunningTime="2024-12-13 01:42:38.137945467 +0000 UTC m=+38.689293986" watchObservedRunningTime="2024-12-13 01:42:38.138169558 +0000 UTC m=+38.689518077" Dec 13 01:42:38.834820 kubelet[2213]: E1213 01:42:38.834738 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:39.799289 kubelet[2213]: E1213 01:42:39.799159 2213 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:39.835516 kubelet[2213]: E1213 01:42:39.835420 2213 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:40.836685 kubelet[2213]: E1213 01:42:40.836613 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:41.837959 kubelet[2213]: E1213 01:42:41.837841 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:42.838316 kubelet[2213]: E1213 01:42:42.838237 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:43.838793 kubelet[2213]: E1213 01:42:43.838731 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:44.839461 kubelet[2213]: E1213 01:42:44.839360 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:45.839774 kubelet[2213]: E1213 01:42:45.839629 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:46.839993 kubelet[2213]: E1213 01:42:46.839913 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:47.290911 kubelet[2213]: I1213 01:42:47.290863 2213 topology_manager.go:215] "Topology Admit Handler" podUID="f9f8c2ec-da07-4420-bf91-25b016ef06b4" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 01:42:47.428640 kubelet[2213]: I1213 01:42:47.428520 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88zv6\" (UniqueName: \"kubernetes.io/projected/f9f8c2ec-da07-4420-bf91-25b016ef06b4-kube-api-access-88zv6\") pod \"nfs-server-provisioner-0\" (UID: \"f9f8c2ec-da07-4420-bf91-25b016ef06b4\") " 
pod="default/nfs-server-provisioner-0" Dec 13 01:42:47.428640 kubelet[2213]: I1213 01:42:47.428617 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f9f8c2ec-da07-4420-bf91-25b016ef06b4-data\") pod \"nfs-server-provisioner-0\" (UID: \"f9f8c2ec-da07-4420-bf91-25b016ef06b4\") " pod="default/nfs-server-provisioner-0" Dec 13 01:42:47.597864 containerd[1611]: time="2024-12-13T01:42:47.597350400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f9f8c2ec-da07-4420-bf91-25b016ef06b4,Namespace:default,Attempt:0,}" Dec 13 01:42:47.773438 systemd-networkd[1251]: cali60e51b789ff: Link UP Dec 13 01:42:47.774357 systemd-networkd[1251]: cali60e51b789ff: Gained carrier Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.686 [INFO][3518] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default f9f8c2ec-da07-4420-bf91-25b016ef06b4 1731 0 2024-12-13 01:42:47 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} 
ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.687 [INFO][3518] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.722 [INFO][3530] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" HandleID="k8s-pod-network.d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.733 [INFO][3530] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" HandleID="k8s-pod-network.d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003038d0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 01:42:47.722476833 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.733 [INFO][3530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.733 [INFO][3530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.733 [INFO][3530] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.735 [INFO][3530] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" host="10.0.0.4" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.739 [INFO][3530] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.743 [INFO][3530] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.745 [INFO][3530] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.747 [INFO][3530] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.747 [INFO][3530] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" host="10.0.0.4" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.748 [INFO][3530] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01 Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.751 [INFO][3530] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" host="10.0.0.4" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.758 [INFO][3530] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 
handle="k8s-pod-network.d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" host="10.0.0.4" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.759 [INFO][3530] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" host="10.0.0.4" Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.759 [INFO][3530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:42:47.800189 containerd[1611]: 2024-12-13 01:42:47.759 [INFO][3530] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" HandleID="k8s-pod-network.d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:42:47.802940 containerd[1611]: 2024-12-13 01:42:47.764 [INFO][3518] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f9f8c2ec-da07-4420-bf91-25b016ef06b4", ResourceVersion:"1731", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:42:47.802940 containerd[1611]: 2024-12-13 01:42:47.764 [INFO][3518] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:42:47.802940 containerd[1611]: 2024-12-13 01:42:47.764 [INFO][3518] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:42:47.802940 containerd[1611]: 2024-12-13 01:42:47.780 [INFO][3518] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:42:47.803251 containerd[1611]: 2024-12-13 01:42:47.781 [INFO][3518] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f9f8c2ec-da07-4420-bf91-25b016ef06b4", ResourceVersion:"1731", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"76:91:60:02:7f:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:42:47.803251 containerd[1611]: 2024-12-13 01:42:47.792 [INFO][3518] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:42:47.838300 containerd[1611]: time="2024-12-13T01:42:47.838077391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:42:47.838434 containerd[1611]: time="2024-12-13T01:42:47.838320528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:42:47.838999 containerd[1611]: time="2024-12-13T01:42:47.838822262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:42:47.838999 containerd[1611]: time="2024-12-13T01:42:47.838932459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:42:47.840070 kubelet[2213]: E1213 01:42:47.840021 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:47.862318 systemd[1]: run-containerd-runc-k8s.io-d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01-runc.i9knIq.mount: Deactivated successfully. 
Dec 13 01:42:47.897809 containerd[1611]: time="2024-12-13T01:42:47.897753201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f9f8c2ec-da07-4420-bf91-25b016ef06b4,Namespace:default,Attempt:0,} returns sandbox id \"d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01\"" Dec 13 01:42:47.899702 containerd[1611]: time="2024-12-13T01:42:47.899670756Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:42:48.843074 kubelet[2213]: E1213 01:42:48.843019 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:49.151287 systemd-networkd[1251]: cali60e51b789ff: Gained IPv6LL Dec 13 01:42:49.844034 kubelet[2213]: E1213 01:42:49.843892 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:50.442923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051195791.mount: Deactivated successfully. 
Dec 13 01:42:50.844194 kubelet[2213]: E1213 01:42:50.844082 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:51.845316 kubelet[2213]: E1213 01:42:51.845224 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:52.227489 containerd[1611]: time="2024-12-13T01:42:52.227435049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:52.230799 containerd[1611]: time="2024-12-13T01:42:52.230757983Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039474" Dec 13 01:42:52.236403 containerd[1611]: time="2024-12-13T01:42:52.236381152Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:52.239539 containerd[1611]: time="2024-12-13T01:42:52.239501567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:42:52.240368 containerd[1611]: time="2024-12-13T01:42:52.240250935Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.340550674s" Dec 13 01:42:52.240368 containerd[1611]: time="2024-12-13T01:42:52.240278116Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 01:42:52.248031 containerd[1611]: time="2024-12-13T01:42:52.247993927Z" level=info msg="CreateContainer within sandbox \"d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 01:42:52.284806 containerd[1611]: time="2024-12-13T01:42:52.284752200Z" level=info msg="CreateContainer within sandbox \"d11d5dcc1ceccf7cf7a77d8f4ee41cad0d4eedbc815e120b0a8a21abe18a3a01\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f283ad5c9a784ede267425fcb080be59490c211200d9c47308aca9987184192b\"" Dec 13 01:42:52.285548 containerd[1611]: time="2024-12-13T01:42:52.285498703Z" level=info msg="StartContainer for \"f283ad5c9a784ede267425fcb080be59490c211200d9c47308aca9987184192b\"" Dec 13 01:42:52.370825 containerd[1611]: time="2024-12-13T01:42:52.370699482Z" level=info msg="StartContainer for \"f283ad5c9a784ede267425fcb080be59490c211200d9c47308aca9987184192b\" returns successfully" Dec 13 01:42:52.846055 kubelet[2213]: E1213 01:42:52.845958 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:53.206110 kubelet[2213]: I1213 01:42:53.205895 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.8643894300000001 podStartE2EDuration="6.205814517s" podCreationTimestamp="2024-12-13 01:42:47 +0000 UTC" firstStartedPulling="2024-12-13 01:42:47.899240416 +0000 UTC m=+48.450588916" lastFinishedPulling="2024-12-13 01:42:52.240665504 +0000 UTC m=+52.792014003" observedRunningTime="2024-12-13 01:42:53.205286344 +0000 UTC m=+53.756634913" watchObservedRunningTime="2024-12-13 01:42:53.205814517 +0000 UTC m=+53.757163055" Dec 13 01:42:53.847315 kubelet[2213]: E1213 01:42:53.847218 2213 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:54.847518 kubelet[2213]: E1213 01:42:54.847433 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:55.848331 kubelet[2213]: E1213 01:42:55.848222 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:56.848928 kubelet[2213]: E1213 01:42:56.848827 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:57.849757 kubelet[2213]: E1213 01:42:57.849664 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:58.850887 kubelet[2213]: E1213 01:42:58.850781 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:59.799489 kubelet[2213]: E1213 01:42:59.799374 2213 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:59.824849 containerd[1611]: time="2024-12-13T01:42:59.824241576Z" level=info msg="StopPodSandbox for \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\"" Dec 13 01:42:59.851993 kubelet[2213]: E1213 01:42:59.851873 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.892 [WARNING][3704] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--sdf67-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0", ResourceVersion:"1662", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0", Pod:"csi-node-driver-sdf67", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc9f782e33f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.893 [INFO][3704] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.893 [INFO][3704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" iface="eth0" netns="" Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.893 [INFO][3704] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.893 [INFO][3704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.938 [INFO][3710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" HandleID="k8s-pod-network.587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0" Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.938 [INFO][3710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.939 [INFO][3710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.947 [WARNING][3710] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" HandleID="k8s-pod-network.587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0" Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.947 [INFO][3710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" HandleID="k8s-pod-network.587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0" Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.949 [INFO][3710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:42:59.959359 containerd[1611]: 2024-12-13 01:42:59.954 [INFO][3704] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Dec 13 01:42:59.960657 containerd[1611]: time="2024-12-13T01:42:59.959396089Z" level=info msg="TearDown network for sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\" successfully" Dec 13 01:42:59.960657 containerd[1611]: time="2024-12-13T01:42:59.959434130Z" level=info msg="StopPodSandbox for \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\" returns successfully" Dec 13 01:42:59.968098 containerd[1611]: time="2024-12-13T01:42:59.968034963Z" level=info msg="RemovePodSandbox for \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\"" Dec 13 01:42:59.968182 containerd[1611]: time="2024-12-13T01:42:59.968112839Z" level=info msg="Forcibly stopping sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\"" Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.018 [WARNING][3730] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--sdf67-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0ba65a02-8118-4915-9dfa-fbbe2bbe53d0", ResourceVersion:"1662", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 41, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"24074a630c43e32e2b25b01c2cc8bbf45249676b6ab790fc56cbd24e17d192e0", Pod:"csi-node-driver-sdf67", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc9f782e33f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.018 [INFO][3730] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.019 [INFO][3730] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" iface="eth0" netns="" Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.019 [INFO][3730] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.019 [INFO][3730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.058 [INFO][3736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" HandleID="k8s-pod-network.587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0" Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.058 [INFO][3736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.058 [INFO][3736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.066 [WARNING][3736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" HandleID="k8s-pod-network.587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0" Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.066 [INFO][3736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" HandleID="k8s-pod-network.587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Workload="10.0.0.4-k8s-csi--node--driver--sdf67-eth0" Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.068 [INFO][3736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:43:00.076161 containerd[1611]: 2024-12-13 01:43:00.071 [INFO][3730] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623" Dec 13 01:43:00.076161 containerd[1611]: time="2024-12-13T01:43:00.075371360Z" level=info msg="TearDown network for sandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\" successfully" Dec 13 01:43:00.105180 containerd[1611]: time="2024-12-13T01:43:00.105069034Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:43:00.105180 containerd[1611]: time="2024-12-13T01:43:00.105186025Z" level=info msg="RemovePodSandbox \"587452bb9cac560a1711ef15564288611643ba47686d985c56c2cbdfa56ce623\" returns successfully" Dec 13 01:43:00.105932 containerd[1611]: time="2024-12-13T01:43:00.105886210Z" level=info msg="StopPodSandbox for \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\"" Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.146 [WARNING][3754] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"97990401-e34a-45ad-b932-28c3a2fd54e7", ResourceVersion:"1689", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e", Pod:"nginx-deployment-6d5f899847-nzs5p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5a17b6b0a7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.146 [INFO][3754] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.146 [INFO][3754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" iface="eth0" netns="" Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.146 [INFO][3754] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.146 [INFO][3754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.167 [INFO][3760] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" HandleID="k8s-pod-network.9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.167 [INFO][3760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.167 [INFO][3760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.174 [WARNING][3760] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" HandleID="k8s-pod-network.9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.174 [INFO][3760] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" HandleID="k8s-pod-network.9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.176 [INFO][3760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:43:00.183321 containerd[1611]: 2024-12-13 01:43:00.179 [INFO][3754] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:43:00.183851 containerd[1611]: time="2024-12-13T01:43:00.183586527Z" level=info msg="TearDown network for sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\" successfully" Dec 13 01:43:00.183851 containerd[1611]: time="2024-12-13T01:43:00.183639627Z" level=info msg="StopPodSandbox for \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\" returns successfully" Dec 13 01:43:00.184569 containerd[1611]: time="2024-12-13T01:43:00.184474565Z" level=info msg="RemovePodSandbox for \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\"" Dec 13 01:43:00.184569 containerd[1611]: time="2024-12-13T01:43:00.184513959Z" level=info msg="Forcibly stopping sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\"" Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.224 [WARNING][3778] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"97990401-e34a-45ad-b932-28c3a2fd54e7", ResourceVersion:"1689", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 42, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"d31d500ec622fb988c356120227d482d8431537ee10e158b164f2faf7fe03b7e", Pod:"nginx-deployment-6d5f899847-nzs5p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5a17b6b0a7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.224 [INFO][3778] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.224 [INFO][3778] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" iface="eth0" netns="" Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.224 [INFO][3778] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.224 [INFO][3778] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.245 [INFO][3784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" HandleID="k8s-pod-network.9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.245 [INFO][3784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.245 [INFO][3784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.252 [WARNING][3784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" HandleID="k8s-pod-network.9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.252 [INFO][3784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" HandleID="k8s-pod-network.9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--nzs5p-eth0" Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.253 [INFO][3784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:43:00.259879 containerd[1611]: 2024-12-13 01:43:00.255 [INFO][3778] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e" Dec 13 01:43:00.260651 containerd[1611]: time="2024-12-13T01:43:00.260577461Z" level=info msg="TearDown network for sandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\" successfully" Dec 13 01:43:00.265065 containerd[1611]: time="2024-12-13T01:43:00.265014919Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 01:43:00.265149 containerd[1611]: time="2024-12-13T01:43:00.265077686Z" level=info msg="RemovePodSandbox \"9c6385a798eef28e85ab5c43aa4d9bb76f80034da986f6d64b9e4c221096680e\" returns successfully" Dec 13 01:43:00.852633 kubelet[2213]: E1213 01:43:00.852559 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:01.853377 kubelet[2213]: E1213 01:43:01.853268 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:02.183993 kubelet[2213]: I1213 01:43:02.183798 2213 topology_manager.go:215] "Topology Admit Handler" podUID="caea03eb-c89f-40cb-a4ac-49e2ff4ff0e8" podNamespace="default" podName="test-pod-1" Dec 13 01:43:02.317979 kubelet[2213]: I1213 01:43:02.317781 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-563fd810-b08a-404e-bd68-2d9fe91b4036\" (UniqueName: \"kubernetes.io/nfs/caea03eb-c89f-40cb-a4ac-49e2ff4ff0e8-pvc-563fd810-b08a-404e-bd68-2d9fe91b4036\") pod \"test-pod-1\" (UID: \"caea03eb-c89f-40cb-a4ac-49e2ff4ff0e8\") " pod="default/test-pod-1" Dec 13 01:43:02.317979 kubelet[2213]: I1213 01:43:02.317878 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dr9w\" (UniqueName: \"kubernetes.io/projected/caea03eb-c89f-40cb-a4ac-49e2ff4ff0e8-kube-api-access-6dr9w\") pod \"test-pod-1\" (UID: \"caea03eb-c89f-40cb-a4ac-49e2ff4ff0e8\") " pod="default/test-pod-1" Dec 13 01:43:02.491154 kernel: FS-Cache: Loaded Dec 13 01:43:02.580288 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:43:02.580378 kernel: RPC: Registered udp transport module. Dec 13 01:43:02.582218 kernel: RPC: Registered tcp transport module. Dec 13 01:43:02.584255 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 01:43:02.586430 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 01:43:02.855672 kubelet[2213]: E1213 01:43:02.854361 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:02.925602 kernel: NFS: Registering the id_resolver key type Dec 13 01:43:02.925738 kernel: Key type id_resolver registered Dec 13 01:43:02.925766 kernel: Key type id_legacy registered Dec 13 01:43:02.984749 nfsidmap[3813]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:43:02.990781 nfsidmap[3814]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:43:03.090385 containerd[1611]: time="2024-12-13T01:43:03.090149208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:caea03eb-c89f-40cb-a4ac-49e2ff4ff0e8,Namespace:default,Attempt:0,}" Dec 13 01:43:03.284812 systemd-networkd[1251]: cali5ec59c6bf6e: Link UP Dec 13 01:43:03.286873 systemd-networkd[1251]: cali5ec59c6bf6e: Gained carrier Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.168 [INFO][3816] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default caea03eb-c89f-40cb-a4ac-49e2ff4ff0e8 1792 0 2024-12-13 01:42:49 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.169 [INFO][3816] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.227 [INFO][3826] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" HandleID="k8s-pod-network.67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Workload="10.0.0.4-k8s-test--pod--1-eth0" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.238 [INFO][3826] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" HandleID="k8s-pod-network.67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b00), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2024-12-13 01:43:03.227375395 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.238 [INFO][3826] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.238 [INFO][3826] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.238 [INFO][3826] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.240 [INFO][3826] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" host="10.0.0.4" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.245 [INFO][3826] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.251 [INFO][3826] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.253 [INFO][3826] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.256 [INFO][3826] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.256 [INFO][3826] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" host="10.0.0.4" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.260 [INFO][3826] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553 Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.265 [INFO][3826] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" host="10.0.0.4" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.272 [INFO][3826] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 
handle="k8s-pod-network.67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" host="10.0.0.4" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.272 [INFO][3826] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" host="10.0.0.4" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.272 [INFO][3826] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.272 [INFO][3826] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" HandleID="k8s-pod-network.67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Workload="10.0.0.4-k8s-test--pod--1-eth0" Dec 13 01:43:03.305529 containerd[1611]: 2024-12-13 01:43:03.277 [INFO][3816] cni-plugin/k8s.go 386: Populated endpoint ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"caea03eb-c89f-40cb-a4ac-49e2ff4ff0e8", ResourceVersion:"1792", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 42, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:43:03.308735 containerd[1611]: 2024-12-13 01:43:03.278 [INFO][3816] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Dec 13 01:43:03.308735 containerd[1611]: 2024-12-13 01:43:03.278 [INFO][3816] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Dec 13 01:43:03.308735 containerd[1611]: 2024-12-13 01:43:03.283 [INFO][3816] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Dec 13 01:43:03.308735 containerd[1611]: 2024-12-13 01:43:03.285 [INFO][3816] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"caea03eb-c89f-40cb-a4ac-49e2ff4ff0e8", ResourceVersion:"1792", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 42, 
49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"6e:ca:b0:f1:d6:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:43:03.308735 containerd[1611]: 2024-12-13 01:43:03.294 [INFO][3816] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Dec 13 01:43:03.357078 containerd[1611]: time="2024-12-13T01:43:03.356841081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:43:03.357078 containerd[1611]: time="2024-12-13T01:43:03.356917174Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:43:03.357078 containerd[1611]: time="2024-12-13T01:43:03.356940078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:03.358729 containerd[1611]: time="2024-12-13T01:43:03.358609384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:43:03.443497 containerd[1611]: time="2024-12-13T01:43:03.443424121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:caea03eb-c89f-40cb-a4ac-49e2ff4ff0e8,Namespace:default,Attempt:0,} returns sandbox id \"67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553\"" Dec 13 01:43:03.446278 containerd[1611]: time="2024-12-13T01:43:03.446150995Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:43:03.803726 containerd[1611]: time="2024-12-13T01:43:03.803653753Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:43:03.804775 containerd[1611]: time="2024-12-13T01:43:03.804737098Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 01:43:03.809694 containerd[1611]: time="2024-12-13T01:43:03.809628119Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 363.439032ms" Dec 13 01:43:03.809694 containerd[1611]: time="2024-12-13T01:43:03.809668515Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 01:43:03.811931 containerd[1611]: time="2024-12-13T01:43:03.811892804Z" level=info msg="CreateContainer within sandbox \"67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 01:43:03.834924 containerd[1611]: time="2024-12-13T01:43:03.834877592Z" level=info msg="CreateContainer within sandbox 
\"67a1819857106e6ee6fb8d4328c53da442065408f0daec395b04b5fce1392553\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"be35c7b06b7a6e65b0c64b9ea8aa78724522dee4a270860acd3ae59c56f14cd5\"" Dec 13 01:43:03.837357 containerd[1611]: time="2024-12-13T01:43:03.836223922Z" level=info msg="StartContainer for \"be35c7b06b7a6e65b0c64b9ea8aa78724522dee4a270860acd3ae59c56f14cd5\"" Dec 13 01:43:03.855738 kubelet[2213]: E1213 01:43:03.855697 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:03.909906 containerd[1611]: time="2024-12-13T01:43:03.909850086Z" level=info msg="StartContainer for \"be35c7b06b7a6e65b0c64b9ea8aa78724522dee4a270860acd3ae59c56f14cd5\" returns successfully" Dec 13 01:43:04.239864 kubelet[2213]: I1213 01:43:04.239764 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.874975008 podStartE2EDuration="15.239688195s" podCreationTimestamp="2024-12-13 01:42:49 +0000 UTC" firstStartedPulling="2024-12-13 01:43:03.445368364 +0000 UTC m=+63.996716893" lastFinishedPulling="2024-12-13 01:43:03.810081581 +0000 UTC m=+64.361430080" observedRunningTime="2024-12-13 01:43:04.239689527 +0000 UTC m=+64.791038066" watchObservedRunningTime="2024-12-13 01:43:04.239688195 +0000 UTC m=+64.791036734" Dec 13 01:43:04.638177 systemd-networkd[1251]: cali5ec59c6bf6e: Gained IPv6LL Dec 13 01:43:04.857624 kubelet[2213]: E1213 01:43:04.857526 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:05.858842 kubelet[2213]: E1213 01:43:05.858736 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:06.860129 kubelet[2213]: E1213 01:43:06.859994 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
01:43:07.860328 kubelet[2213]: E1213 01:43:07.860259 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:08.861023 kubelet[2213]: E1213 01:43:08.860968 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:09.862151 kubelet[2213]: E1213 01:43:09.862078 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:10.863064 kubelet[2213]: E1213 01:43:10.862958 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:11.863557 kubelet[2213]: E1213 01:43:11.863441 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:12.864720 kubelet[2213]: E1213 01:43:12.864620 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:13.864860 kubelet[2213]: E1213 01:43:13.864795 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:14.865352 kubelet[2213]: E1213 01:43:14.865222 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:15.865494 kubelet[2213]: E1213 01:43:15.865430 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:16.866548 kubelet[2213]: E1213 01:43:16.866390 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:17.866959 kubelet[2213]: E1213 01:43:17.866832 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
01:43:18.867212 kubelet[2213]: E1213 01:43:18.867148 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:19.799395 kubelet[2213]: E1213 01:43:19.799298 2213 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:19.868266 kubelet[2213]: E1213 01:43:19.868210 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:20.665239 kubelet[2213]: E1213 01:43:20.665156 2213 controller.go:195] "Failed to update lease" err="Put \"https://188.245.117.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 01:43:20.868843 kubelet[2213]: E1213 01:43:20.868778 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:20.948716 kubelet[2213]: E1213 01:43:20.948520 2213 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": Get \"https://188.245.117.98:6443/api/v1/nodes/10.0.0.4?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 01:43:21.869314 kubelet[2213]: E1213 01:43:21.869214 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:22.870560 kubelet[2213]: E1213 01:43:22.870448 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:23.871412 kubelet[2213]: E1213 01:43:23.871307 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:24.872318 kubelet[2213]: E1213 01:43:24.872212 2213 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:25.872830 kubelet[2213]: E1213 01:43:25.872722 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:43:26.873668 kubelet[2213]: E1213 01:43:26.873553 2213 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"