Dec 16 09:52:44.175590 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 16 09:52:44.175659 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 16 09:52:44.175675 kernel: BIOS-provided physical RAM map: Dec 16 09:52:44.175686 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 16 09:52:44.175696 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 16 09:52:44.175707 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 16 09:52:44.175720 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Dec 16 09:52:44.175731 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Dec 16 09:52:44.175746 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 16 09:52:44.175797 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 16 09:52:44.175808 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 09:52:44.175819 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 16 09:52:44.175830 kernel: NX (Execute Disable) protection: active Dec 16 09:52:44.175837 kernel: APIC: Static calls initialized Dec 16 09:52:44.175848 kernel: SMBIOS 2.8 present. Dec 16 09:52:44.175855 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Dec 16 09:52:44.175862 kernel: Hypervisor detected: KVM Dec 16 09:52:44.175869 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 09:52:44.175875 kernel: kvm-clock: using sched offset of 2998105421 cycles Dec 16 09:52:44.175882 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 09:52:44.175890 kernel: tsc: Detected 2495.312 MHz processor Dec 16 09:52:44.175897 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 09:52:44.175904 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 09:52:44.175913 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Dec 16 09:52:44.175920 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 16 09:52:44.175927 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 09:52:44.175934 kernel: Using GB pages for direct mapping Dec 16 09:52:44.175941 kernel: ACPI: Early table checksum verification disabled Dec 16 09:52:44.175948 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS ) Dec 16 09:52:44.175955 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 09:52:44.175961 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 09:52:44.175968 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 09:52:44.175977 kernel: ACPI: FACS 0x000000007CFE0000 000040 Dec 16 09:52:44.175984 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 09:52:44.175991 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) 
Dec 16 09:52:44.175998 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 09:52:44.176005 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 09:52:44.176012 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540] Dec 16 09:52:44.176019 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c] Dec 16 09:52:44.176026 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Dec 16 09:52:44.176038 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0] Dec 16 09:52:44.176045 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8] Dec 16 09:52:44.176052 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634] Dec 16 09:52:44.176059 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c] Dec 16 09:52:44.176066 kernel: No NUMA configuration found Dec 16 09:52:44.176073 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Dec 16 09:52:44.176082 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Dec 16 09:52:44.176090 kernel: Zone ranges: Dec 16 09:52:44.176097 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 09:52:44.176104 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Dec 16 09:52:44.176111 kernel: Normal empty Dec 16 09:52:44.176118 kernel: Movable zone start for each node Dec 16 09:52:44.176125 kernel: Early memory node ranges Dec 16 09:52:44.176132 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 16 09:52:44.176139 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Dec 16 09:52:44.176146 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Dec 16 09:52:44.176155 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 09:52:44.176162 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 09:52:44.176169 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 16 09:52:44.176176 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 09:52:44.176184 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 09:52:44.176191 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 09:52:44.176198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 09:52:44.176205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 09:52:44.176212 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 09:52:44.176221 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 09:52:44.176228 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 09:52:44.176235 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 09:52:44.176243 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 16 09:52:44.176250 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 16 09:52:44.176257 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 09:52:44.176264 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 16 09:52:44.176271 kernel: Booting paravirtualized kernel on KVM Dec 16 09:52:44.176278 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 09:52:44.176288 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 16 09:52:44.176295 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 16 09:52:44.176302 kernel: pcpu-alloc: 
s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 16 09:52:44.176309 kernel: pcpu-alloc: [0] 0 1 Dec 16 09:52:44.176316 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 16 09:52:44.176324 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 16 09:52:44.176332 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 16 09:52:44.176339 kernel: random: crng init done Dec 16 09:52:44.176348 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 09:52:44.176356 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 09:52:44.176363 kernel: Fallback order for Node 0: 0 Dec 16 09:52:44.176370 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708 Dec 16 09:52:44.176377 kernel: Policy zone: DMA32 Dec 16 09:52:44.176384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 09:52:44.176391 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 125148K reserved, 0K cma-reserved) Dec 16 09:52:44.176398 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 16 09:52:44.176405 kernel: ftrace: allocating 37902 entries in 149 pages Dec 16 09:52:44.176415 kernel: ftrace: allocated 149 pages with 4 groups Dec 16 09:52:44.176422 kernel: Dynamic Preempt: voluntary Dec 16 09:52:44.176429 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 09:52:44.176437 kernel: rcu: RCU event tracing is enabled. Dec 16 09:52:44.176444 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 16 09:52:44.176451 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 09:52:44.176458 kernel: Rude variant of Tasks RCU enabled. Dec 16 09:52:44.176466 kernel: Tracing variant of Tasks RCU enabled. Dec 16 09:52:44.176473 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 09:52:44.176480 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 16 09:52:44.176489 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 16 09:52:44.176496 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 09:52:44.176503 kernel: Console: colour VGA+ 80x25 Dec 16 09:52:44.176510 kernel: printk: console [tty0] enabled Dec 16 09:52:44.176517 kernel: printk: console [ttyS0] enabled Dec 16 09:52:44.176524 kernel: ACPI: Core revision 20230628 Dec 16 09:52:44.176532 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 16 09:52:44.176539 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 09:52:44.176546 kernel: x2apic enabled Dec 16 09:52:44.176555 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 09:52:44.176563 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 16 09:52:44.176570 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 16 09:52:44.176577 kernel: Calibrating delay loop (skipped) preset value.. 
4990.62 BogoMIPS (lpj=2495312) Dec 16 09:52:44.176584 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 09:52:44.176591 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 16 09:52:44.176608 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 16 09:52:44.176616 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 09:52:44.176633 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 09:52:44.176640 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 16 09:52:44.176648 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 16 09:52:44.176658 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 16 09:52:44.176665 kernel: RETBleed: Mitigation: untrained return thunk Dec 16 09:52:44.176673 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 09:52:44.176681 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 09:52:44.176691 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Dec 16 09:52:44.176702 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Dec 16 09:52:44.176712 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Dec 16 09:52:44.176723 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 09:52:44.176736 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 09:52:44.176744 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 09:52:44.176771 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 09:52:44.176779 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Dec 16 09:52:44.176787 kernel: Freeing SMP alternatives memory: 32K Dec 16 09:52:44.176797 kernel: pid_max: default: 32768 minimum: 301 Dec 16 09:52:44.176804 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 16 09:52:44.176812 kernel: landlock: Up and running. Dec 16 09:52:44.176819 kernel: SELinux: Initializing. Dec 16 09:52:44.176827 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 09:52:44.176834 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 09:52:44.176842 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 16 09:52:44.176849 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 09:52:44.176857 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 09:52:44.176867 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 16 09:52:44.176874 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 16 09:52:44.176882 kernel: ... version: 0 Dec 16 09:52:44.176889 kernel: ... bit width: 48 Dec 16 09:52:44.176897 kernel: ... generic registers: 6 Dec 16 09:52:44.176904 kernel: ... value mask: 0000ffffffffffff Dec 16 09:52:44.176911 kernel: ... max period: 00007fffffffffff Dec 16 09:52:44.176919 kernel: ... fixed-purpose events: 0 Dec 16 09:52:44.176926 kernel: ... 
event mask: 000000000000003f Dec 16 09:52:44.176936 kernel: signal: max sigframe size: 1776 Dec 16 09:52:44.176943 kernel: rcu: Hierarchical SRCU implementation. Dec 16 09:52:44.176951 kernel: rcu: Max phase no-delay instances is 400. Dec 16 09:52:44.176958 kernel: smp: Bringing up secondary CPUs ... Dec 16 09:52:44.176966 kernel: smpboot: x86: Booting SMP configuration: Dec 16 09:52:44.176973 kernel: .... node #0, CPUs: #1 Dec 16 09:52:44.176981 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 09:52:44.176988 kernel: smpboot: Max logical packages: 1 Dec 16 09:52:44.176995 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS) Dec 16 09:52:44.177003 kernel: devtmpfs: initialized Dec 16 09:52:44.177013 kernel: x86/mm: Memory block size: 128MB Dec 16 09:52:44.177020 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 09:52:44.177028 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 16 09:52:44.177035 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 09:52:44.177043 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 09:52:44.177050 kernel: audit: initializing netlink subsys (disabled) Dec 16 09:52:44.177057 kernel: audit: type=2000 audit(1734342762.960:1): state=initialized audit_enabled=0 res=1 Dec 16 09:52:44.177065 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 09:52:44.177074 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 09:52:44.177082 kernel: cpuidle: using governor menu Dec 16 09:52:44.177089 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 09:52:44.177098 kernel: dca service started, version 1.12.1 Dec 16 09:52:44.177112 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 16 09:52:44.177130 kernel: PCI: Using configuration type 1 for base access Dec 16 09:52:44.177142 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 09:52:44.177152 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 09:52:44.177161 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 09:52:44.177173 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 09:52:44.177180 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 09:52:44.177188 kernel: ACPI: Added _OSI(Module Device) Dec 16 09:52:44.177195 kernel: ACPI: Added _OSI(Processor Device) Dec 16 09:52:44.177205 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 16 09:52:44.177215 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 09:52:44.177226 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 09:52:44.177236 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 16 09:52:44.177245 kernel: ACPI: Interpreter enabled Dec 16 09:52:44.177254 kernel: ACPI: PM: (supports S0 S5) Dec 16 09:52:44.177267 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 09:52:44.177283 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 09:52:44.177294 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 09:52:44.177304 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 16 09:52:44.177314 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 09:52:44.177526 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 09:52:44.177692 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 16 09:52:44.177859 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 16 09:52:44.177872 kernel: PCI host bridge to bus 0000:00 Dec 16 09:52:44.178007 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 09:52:44.178119 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 09:52:44.178229 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 09:52:44.178338 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Dec 16 09:52:44.178476 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 16 09:52:44.178635 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 16 09:52:44.178776 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 09:52:44.178921 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 16 09:52:44.179078 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Dec 16 09:52:44.179203 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Dec 16 09:52:44.179323 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Dec 16 09:52:44.179447 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Dec 16 09:52:44.179622 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Dec 16 09:52:44.179769 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 09:52:44.179923 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Dec 16 09:52:44.180091 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Dec 16 09:52:44.180227 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Dec 16 09:52:44.180348 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Dec 16 09:52:44.180481 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Dec 16 09:52:44.180647 kernel: pci 0000:00:02.2: reg 0x10: [mem 
0xfea13000-0xfea13fff] Dec 16 09:52:44.180828 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Dec 16 09:52:44.180966 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Dec 16 09:52:44.181124 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Dec 16 09:52:44.181251 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Dec 16 09:52:44.181391 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Dec 16 09:52:44.181513 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Dec 16 09:52:44.181671 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Dec 16 09:52:44.181852 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Dec 16 09:52:44.182003 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Dec 16 09:52:44.182168 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Dec 16 09:52:44.182305 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Dec 16 09:52:44.182426 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Dec 16 09:52:44.182553 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 16 09:52:44.182690 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 16 09:52:44.182944 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 16 09:52:44.183079 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Dec 16 09:52:44.183207 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Dec 16 09:52:44.183335 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 16 09:52:44.183453 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 16 09:52:44.183590 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Dec 16 09:52:44.183731 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Dec 16 09:52:44.183912 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Dec 16 09:52:44.184057 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref] Dec 16 09:52:44.184197 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Dec 16 09:52:44.184319 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 09:52:44.184438 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Dec 16 09:52:44.184576 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Dec 16 09:52:44.184746 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Dec 16 09:52:44.185851 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Dec 16 09:52:44.185984 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 09:52:44.187846 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 09:52:44.188031 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Dec 16 09:52:44.188171 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Dec 16 09:52:44.188324 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Dec 16 09:52:44.188455 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Dec 16 09:52:44.188589 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 09:52:44.188727 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 09:52:44.188940 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Dec 16 09:52:44.189095 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Dec 16 09:52:44.189222 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Dec 16 09:52:44.189364 kernel: pci 0000:00:02.3: 
bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 09:52:44.189493 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 09:52:44.189654 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Dec 16 09:52:44.190874 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Dec 16 09:52:44.191025 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Dec 16 09:52:44.191146 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 09:52:44.191293 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 09:52:44.191474 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Dec 16 09:52:44.191642 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Dec 16 09:52:44.192824 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Dec 16 09:52:44.192958 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Dec 16 09:52:44.193106 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 09:52:44.193261 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 09:52:44.193277 kernel: acpiphp: Slot [0] registered Dec 16 09:52:44.193432 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Dec 16 09:52:44.193560 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Dec 16 09:52:44.193724 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Dec 16 09:52:44.194952 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Dec 16 09:52:44.195100 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Dec 16 09:52:44.195229 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 09:52:44.195348 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 09:52:44.195358 kernel: acpiphp: Slot [0-2] registered Dec 16 09:52:44.195478 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Dec 16 09:52:44.195614 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Dec 16 09:52:44.195787 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 09:52:44.195801 kernel: acpiphp: Slot [0-3] registered Dec 16 09:52:44.195952 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Dec 16 09:52:44.196118 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 09:52:44.196268 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 09:52:44.196282 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 09:52:44.196294 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 09:52:44.196304 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 09:52:44.196315 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 09:52:44.196326 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 16 09:52:44.196337 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 16 09:52:44.196347 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 16 09:52:44.196362 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 16 09:52:44.196373 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 16 09:52:44.196384 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 16 09:52:44.196395 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 16 09:52:44.196405 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 16 09:52:44.196416 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 
20 Dec 16 09:52:44.196425 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 16 09:52:44.196433 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 16 09:52:44.196441 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 16 09:52:44.196452 kernel: iommu: Default domain type: Translated Dec 16 09:52:44.196460 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 09:52:44.196467 kernel: PCI: Using ACPI for IRQ routing Dec 16 09:52:44.196475 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 09:52:44.196483 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 16 09:52:44.196491 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Dec 16 09:52:44.196636 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 16 09:52:44.199831 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 16 09:52:44.199968 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 09:52:44.199983 kernel: vgaarb: loaded Dec 16 09:52:44.199991 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 16 09:52:44.199999 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 16 09:52:44.200007 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 09:52:44.200014 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 09:52:44.200023 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 09:52:44.200030 kernel: pnp: PnP ACPI init Dec 16 09:52:44.200160 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 16 09:52:44.200175 kernel: pnp: PnP ACPI: found 5 devices Dec 16 09:52:44.200183 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 09:52:44.200191 kernel: NET: Registered PF_INET protocol family Dec 16 09:52:44.200199 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 09:52:44.200207 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 16 09:52:44.200215 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 09:52:44.200223 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 09:52:44.200231 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 16 09:52:44.200239 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 16 09:52:44.200250 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 09:52:44.200261 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 09:52:44.200272 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 09:52:44.200283 kernel: NET: Registered PF_XDP protocol family Dec 16 09:52:44.200425 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 16 09:52:44.200568 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 16 09:52:44.200703 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 16 09:52:44.200850 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Dec 16 09:52:44.201053 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Dec 16 09:52:44.201199 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Dec 16 09:52:44.201323 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Dec 16 09:52:44.201472 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 09:52:44.201610 kernel: pci 
0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Dec 16 09:52:44.201735 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Dec 16 09:52:44.203731 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 09:52:44.203935 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 09:52:44.204062 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Dec 16 09:52:44.204181 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 09:52:44.204299 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 09:52:44.204418 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Dec 16 09:52:44.204536 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 09:52:44.204693 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 09:52:44.204843 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Dec 16 09:52:44.205015 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 09:52:44.205141 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 09:52:44.205379 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Dec 16 09:52:44.206845 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 09:52:44.207031 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 09:52:44.207198 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Dec 16 09:52:44.207330 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Dec 16 09:52:44.207450 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 09:52:44.207571 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 09:52:44.207717 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Dec 16 09:52:44.207860 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Dec 16 09:52:44.208015 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Dec 16 09:52:44.208141 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 09:52:44.209959 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Dec 16 09:52:44.210144 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Dec 16 09:52:44.210269 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 09:52:44.210398 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 09:52:44.210518 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 09:52:44.210643 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 09:52:44.211992 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 09:52:44.212119 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Dec 16 09:52:44.212231 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 16 09:52:44.212340 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 16 09:52:44.212467 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 16 09:52:44.212585 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Dec 16 09:52:44.212723 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 16 09:52:44.213929 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 09:52:44.214058 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 16 09:52:44.214174 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 
09:52:44.214298 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 16 09:52:44.214413 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 09:52:44.214537 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 16 09:52:44.214669 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 09:52:44.214808 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Dec 16 09:52:44.214923 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 09:52:44.215047 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Dec 16 09:52:44.215162 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 16 09:52:44.215275 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 09:52:44.215413 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Dec 16 09:52:44.215537 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Dec 16 09:52:44.215662 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 09:52:44.217879 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Dec 16 09:52:44.218012 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 16 09:52:44.218126 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 09:52:44.218138 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 09:52:44.218147 kernel: PCI: CLS 0 bytes, default 64 Dec 16 09:52:44.218160 kernel: Initialise system trusted keyrings Dec 16 09:52:44.218169 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 09:52:44.218178 kernel: Key type asymmetric registered Dec 16 09:52:44.218186 kernel: Asymmetric key parser 'x509' registered Dec 16 09:52:44.218195 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 16 09:52:44.218203 kernel: io scheduler mq-deadline registered Dec 16 09:52:44.218212 kernel: io scheduler kyber registered Dec 16 09:52:44.218220 kernel: io scheduler bfq registered Dec 16 09:52:44.218346 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 09:52:44.218472 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 09:52:44.218608 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 09:52:44.218790 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 09:52:44.218946 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 09:52:44.219069 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 09:52:44.219191 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 09:52:44.219310 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 09:52:44.219457 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 09:52:44.219588 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 09:52:44.219730 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 09:52:44.221926 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 09:52:44.222065 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 09:52:44.222186 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 09:52:44.222310 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 09:52:44.222431 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 09:52:44.222448 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 09:52:44.222569 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Dec 16 09:52:44.222739 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Dec 16 
09:52:44.222774 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 09:52:44.222783 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Dec 16 09:52:44.222792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 09:52:44.222800 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 09:52:44.222808 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 09:52:44.222816 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 09:52:44.222824 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 09:52:44.222838 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 09:52:44.222970 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 09:52:44.223084 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 09:52:44.223194 kernel: rtc_cmos 00:03: setting system clock to 2024-12-16T09:52:43 UTC (1734342763) Dec 16 09:52:44.223305 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 16 09:52:44.223316 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Dec 16 09:52:44.223324 kernel: NET: Registered PF_INET6 protocol family Dec 16 09:52:44.223335 kernel: Segment Routing with IPv6 Dec 16 09:52:44.223351 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 09:52:44.223362 kernel: NET: Registered PF_PACKET protocol family Dec 16 09:52:44.223374 kernel: Key type dns_resolver registered Dec 16 09:52:44.223386 kernel: IPI shorthand broadcast: enabled Dec 16 09:52:44.223397 kernel: sched_clock: Marking stable (1409010395, 140631097)->(1604026260, -54384768) Dec 16 09:52:44.223406 kernel: registered taskstats version 1 Dec 16 09:52:44.223414 kernel: Loading compiled-in X.509 certificates Dec 16 09:52:44.223422 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 16 09:52:44.223430 kernel: Key type .fscrypt registered Dec 16 09:52:44.223441 kernel: Key type fscrypt-provisioning registered Dec 16 09:52:44.223449 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 09:52:44.223457 kernel: ima: Allocated hash algorithm: sha1 Dec 16 09:52:44.223465 kernel: ima: No architecture policies found Dec 16 09:52:44.223473 kernel: clk: Disabling unused clocks Dec 16 09:52:44.223481 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 16 09:52:44.223489 kernel: Write protecting the kernel read-only data: 36864k Dec 16 09:52:44.223498 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 16 09:52:44.223506 kernel: Run /init as init process Dec 16 09:52:44.223516 kernel: with arguments: Dec 16 09:52:44.223525 kernel: /init Dec 16 09:52:44.223533 kernel: with environment: Dec 16 09:52:44.223541 kernel: HOME=/ Dec 16 09:52:44.223549 kernel: TERM=linux Dec 16 09:52:44.223557 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 16 09:52:44.223568 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 16 09:52:44.223579 systemd[1]: Detected virtualization kvm. Dec 16 09:52:44.223591 systemd[1]: Detected architecture x86-64. Dec 16 09:52:44.223613 systemd[1]: Running in initrd. Dec 16 09:52:44.223621 systemd[1]: No hostname configured, using default hostname. 
Dec 16 09:52:44.223629 systemd[1]: Hostname set to . Dec 16 09:52:44.223638 systemd[1]: Initializing machine ID from VM UUID. Dec 16 09:52:44.223646 systemd[1]: Queued start job for default target initrd.target. Dec 16 09:52:44.223655 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 09:52:44.223663 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 09:52:44.223675 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 09:52:44.223684 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 09:52:44.223692 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 09:52:44.223702 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 09:52:44.223712 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 09:52:44.223721 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 09:52:44.223732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 09:52:44.223746 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 09:52:44.225820 systemd[1]: Reached target paths.target - Path Units. Dec 16 09:52:44.225830 systemd[1]: Reached target slices.target - Slice Units. Dec 16 09:52:44.225838 systemd[1]: Reached target swap.target - Swaps. Dec 16 09:52:44.225847 systemd[1]: Reached target timers.target - Timer Units. Dec 16 09:52:44.225855 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 09:52:44.225864 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 09:52:44.225873 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 09:52:44.225885 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 16 09:52:44.225894 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 09:52:44.225902 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 09:52:44.225914 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 09:52:44.225925 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 09:52:44.225937 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 09:52:44.225949 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 09:52:44.225960 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 09:52:44.225968 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 09:52:44.225980 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 09:52:44.225988 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 09:52:44.225996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 09:52:44.226038 systemd-journald[188]: Collecting audit messages is disabled. Dec 16 09:52:44.226063 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 09:52:44.226072 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 09:52:44.226081 systemd[1]: Finished systemd-fsck-usr.service. 
Dec 16 09:52:44.226091 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 09:52:44.226102 systemd-journald[188]: Journal started Dec 16 09:52:44.226121 systemd-journald[188]: Runtime Journal (/run/log/journal/c6eda34e7bee419583735e87670cfe5c) is 4.8M, max 38.4M, 33.6M free. Dec 16 09:52:44.206994 systemd-modules-load[189]: Inserted module 'overlay' Dec 16 09:52:44.251366 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 09:52:44.251391 kernel: Bridge firewalling registered Dec 16 09:52:44.251402 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 09:52:44.244778 systemd-modules-load[189]: Inserted module 'br_netfilter' Dec 16 09:52:44.258188 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 09:52:44.259875 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 09:52:44.268922 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 09:52:44.271897 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 09:52:44.280974 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 09:52:44.283012 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 09:52:44.295993 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 09:52:44.296967 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 09:52:44.297798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 09:52:44.300703 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 09:52:44.309972 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 09:52:44.314002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 09:52:44.316826 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 09:52:44.325687 dracut-cmdline[221]: dracut-dracut-053 Dec 16 09:52:44.330980 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 16 09:52:44.355020 systemd-resolved[222]: Positive Trust Anchors: Dec 16 09:52:44.355041 systemd-resolved[222]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 09:52:44.355072 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 09:52:44.358138 systemd-resolved[222]: Defaulting to hostname 'linux'. Dec 16 09:52:44.359669 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 09:52:44.364846 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 09:52:44.419830 kernel: SCSI subsystem initialized Dec 16 09:52:44.430787 kernel: Loading iSCSI transport class v2.0-870. Dec 16 09:52:44.453810 kernel: iscsi: registered transport (tcp) Dec 16 09:52:44.490884 kernel: iscsi: registered transport (qla4xxx) Dec 16 09:52:44.490972 kernel: QLogic iSCSI HBA Driver Dec 16 09:52:44.552051 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 09:52:44.560938 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 09:52:44.588258 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 09:52:44.588334 kernel: device-mapper: uevent: version 1.0.3 Dec 16 09:52:44.588359 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 16 09:52:44.637832 kernel: raid6: avx2x4 gen() 27856 MB/s Dec 16 09:52:44.655829 kernel: raid6: avx2x2 gen() 29727 MB/s Dec 16 09:52:44.672984 kernel: raid6: avx2x1 gen() 25729 MB/s Dec 16 09:52:44.673050 kernel: raid6: using algorithm avx2x2 gen() 29727 MB/s Dec 16 09:52:44.691826 kernel: raid6: .... xor() 19612 MB/s, rmw enabled Dec 16 09:52:44.691892 kernel: raid6: using avx2x2 recovery algorithm Dec 16 09:52:44.713827 kernel: xor: automatically using best checksumming function avx Dec 16 09:52:44.882816 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 09:52:44.902091 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 09:52:44.911877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 09:52:44.954932 systemd-udevd[406]: Using default interface naming scheme 'v255'. Dec 16 09:52:44.963913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 09:52:44.973965 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 09:52:45.011858 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Dec 16 09:52:45.074723 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 09:52:45.081984 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 09:52:45.198552 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 09:52:45.211957 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 09:52:45.252939 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 09:52:45.256655 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Dec 16 09:52:45.259078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 09:52:45.261174 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 09:52:45.267997 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 09:52:45.296906 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 09:52:45.308785 kernel: libata version 3.00 loaded. Dec 16 09:52:45.318986 kernel: scsi host0: Virtio SCSI HBA Dec 16 09:52:45.319081 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 09:52:45.426650 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 09:52:45.426670 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 09:52:45.426682 kernel: AVX2 version of gcm_enc/dec engaged. Dec 16 09:52:45.426692 kernel: AES CTR mode by8 optimization enabled Dec 16 09:52:45.426708 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 16 09:52:45.426897 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 09:52:45.427563 kernel: scsi host1: ahci Dec 16 09:52:45.428788 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 16 09:52:45.428977 kernel: scsi host2: ahci Dec 16 09:52:45.429125 kernel: scsi host3: ahci Dec 16 09:52:45.429267 kernel: scsi host4: ahci Dec 16 09:52:45.429406 kernel: ACPI: bus type USB registered Dec 16 09:52:45.429417 kernel: usbcore: registered new interface driver usbfs Dec 16 09:52:45.429427 kernel: usbcore: registered new interface driver hub Dec 16 09:52:45.429442 kernel: usbcore: registered new device driver usb Dec 16 09:52:45.429452 kernel: scsi host5: ahci Dec 16 09:52:45.429610 kernel: scsi host6: ahci Dec 16 09:52:45.432815 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 40 Dec 16 09:52:45.432835 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 40 Dec 16 09:52:45.432846 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 40 Dec 16 09:52:45.432856 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 40 Dec 16 09:52:45.432866 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 40 Dec 16 09:52:45.432881 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 40 Dec 16 09:52:45.395500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 09:52:45.395632 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 09:52:45.396376 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 09:52:45.396948 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 09:52:45.397066 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 09:52:45.397606 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 09:52:45.409110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 09:52:45.479666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 09:52:45.486940 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 09:52:45.509114 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 16 09:52:45.740781 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 16 09:52:45.740879 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 16 09:52:45.740904 kernel: ata1.00: applying bridge limits Dec 16 09:52:45.740926 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 09:52:45.741814 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 09:52:45.746098 kernel: ata1.00: configured for UDMA/100 Dec 16 09:52:45.746778 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 09:52:45.751794 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 09:52:45.751834 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 09:52:45.755710 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 16 09:52:45.824346 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 16 09:52:45.854252 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 16 09:52:45.854521 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 09:52:45.854880 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 16 09:52:45.855128 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 16 09:52:45.855376 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 16 09:52:45.855630 kernel: hub 1-0:1.0: USB hub found Dec 16 09:52:45.855931 kernel: sd 0:0:0:0: Power-on or device reset occurred Dec 16 09:52:45.881380 kernel: hub 1-0:1.0: 4 ports detected Dec 16 09:52:45.881697 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 16 09:52:45.881990 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 16 09:52:45.882854 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 16 09:52:45.883135 kernel: hub 2-0:1.0: USB hub found Dec 16 09:52:45.883403 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Dec 16 09:52:45.883683 kernel: hub 2-0:1.0: 4 ports detected Dec 16 09:52:45.884344 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 16 09:52:45.884630 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 16 09:52:45.888881 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 09:52:45.888915 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 09:52:45.888933 kernel: GPT:17805311 != 80003071 Dec 16 09:52:45.888950 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 09:52:45.888966 kernel: GPT:17805311 != 80003071 Dec 16 09:52:45.888982 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 09:52:45.889005 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:52:45.889027 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 16 09:52:45.889334 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Dec 16 09:52:45.939800 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (459) Dec 16 09:52:45.945784 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (448) Dec 16 09:52:45.952346 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 16 09:52:45.982124 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 16 09:52:45.996281 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 09:52:46.003827 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Dec 16 09:52:46.005082 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 16 09:52:46.011895 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 09:52:46.020285 disk-uuid[574]: Primary Header is updated. Dec 16 09:52:46.020285 disk-uuid[574]: Secondary Entries is updated. Dec 16 09:52:46.020285 disk-uuid[574]: Secondary Header is updated. Dec 16 09:52:46.038817 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:52:46.052797 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:52:46.098815 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 09:52:46.243795 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 09:52:46.251434 kernel: usbcore: registered new interface driver usbhid Dec 16 09:52:46.251464 kernel: usbhid: USB HID core driver Dec 16 09:52:46.259776 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 16 09:52:46.259814 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 16 09:52:47.066071 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 09:52:47.071151 disk-uuid[575]: The operation has completed successfully. Dec 16 09:52:47.173093 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 09:52:47.173316 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 09:52:47.187996 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 09:52:47.214228 sh[594]: Success Dec 16 09:52:47.241143 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 16 09:52:47.329647 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 09:52:47.342979 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 09:52:47.347124 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 09:52:47.382136 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 16 09:52:47.382205 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 09:52:47.384880 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 16 09:52:47.389291 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 09:52:47.389321 kernel: BTRFS info (device dm-0): using free space tree Dec 16 09:52:47.403787 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 16 09:52:47.406391 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 09:52:47.408701 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 09:52:47.416144 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 09:52:47.421040 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
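(The GPT complaint above, "17805311 != 80003071", means the primary header still records the backup header at the last LBA of the original, smaller disk image, while the provisioned 80003072-sector disk is larger; the sgdisk-style output from disk-uuid ("Primary Header is updated ... Secondary Header is updated") appears to be what rewrites both copies. Purely as an illustration of that check, not part of the boot flow, here is a minimal Python sketch, assuming a 512-byte logical sector size and a readable device or image path given on the command line.)

```python
# Illustrative only: check whether a disk's backup GPT header sits at the
# last sector, which is the condition the kernel warns about above.
# Assumptions: 512-byte logical sectors, path passed as argv[1].
import os
import struct
import sys

def gpt_alternate_lba_matches(path, sector=512):
    with open(path, "rb") as f:
        disk_bytes = f.seek(0, os.SEEK_END)   # size of the device/image in bytes
        last_lba = disk_bytes // sector - 1
        f.seek(sector)                        # primary GPT header lives at LBA 1
        header = f.read(92)
    if header[:8] != b"EFI PART":
        raise ValueError("no GPT signature at LBA 1")
    # Offset 32 in the header is the 64-bit LBA of the backup (alternate) header.
    alternate_lba = struct.unpack_from("<Q", header, 32)[0]
    return alternate_lba == last_lba, alternate_lba, last_lba

if __name__ == "__main__":
    ok, alt, last = gpt_alternate_lba_matches(sys.argv[1])
    print("backup GPT header at end of disk" if ok else f"mismatch: {alt} != {last}")
```

(Run against /dev/sda after the header update above, the two values should match again.)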
Dec 16 09:52:47.448046 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:52:47.448124 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 09:52:47.450912 kernel: BTRFS info (device sda6): using free space tree Dec 16 09:52:47.463530 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 09:52:47.463634 kernel: BTRFS info (device sda6): auto enabling async discard Dec 16 09:52:47.485252 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 16 09:52:47.493815 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:52:47.495562 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 09:52:47.503981 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 09:52:47.606118 ignition[690]: Ignition 2.19.0 Dec 16 09:52:47.607961 ignition[690]: Stage: fetch-offline Dec 16 09:52:47.608448 ignition[690]: no configs at "/usr/lib/ignition/base.d" Dec 16 09:52:47.608459 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:52:47.608546 ignition[690]: parsed url from cmdline: "" Dec 16 09:52:47.609572 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 09:52:47.608550 ignition[690]: no config URL provided Dec 16 09:52:47.612860 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 09:52:47.608556 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 09:52:47.608564 ignition[690]: no config at "/usr/lib/ignition/user.ign" Dec 16 09:52:47.608569 ignition[690]: failed to fetch config: resource requires networking Dec 16 09:52:47.609900 ignition[690]: Ignition finished successfully Dec 16 09:52:47.620001 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 09:52:47.643027 systemd-networkd[781]: lo: Link UP Dec 16 09:52:47.643040 systemd-networkd[781]: lo: Gained carrier Dec 16 09:52:47.645772 systemd-networkd[781]: Enumeration completed Dec 16 09:52:47.646569 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 09:52:47.646771 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:52:47.646776 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 09:52:47.647830 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:52:47.647834 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 09:52:47.648723 systemd-networkd[781]: eth0: Link UP Dec 16 09:52:47.648727 systemd-networkd[781]: eth0: Gained carrier Dec 16 09:52:47.648734 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:52:47.648979 systemd[1]: Reached target network.target - Network. Dec 16 09:52:47.653554 systemd-networkd[781]: eth1: Link UP Dec 16 09:52:47.653558 systemd-networkd[781]: eth1: Gained carrier Dec 16 09:52:47.653565 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:52:47.656992 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 09:52:47.671962 ignition[783]: Ignition 2.19.0 Dec 16 09:52:47.671978 ignition[783]: Stage: fetch Dec 16 09:52:47.672189 ignition[783]: no configs at "/usr/lib/ignition/base.d" Dec 16 09:52:47.672201 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:52:47.672301 ignition[783]: parsed url from cmdline: "" Dec 16 09:52:47.672305 ignition[783]: no config URL provided Dec 16 09:52:47.672310 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 09:52:47.672322 ignition[783]: no config at "/usr/lib/ignition/user.ign" Dec 16 09:52:47.672340 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 16 09:52:47.672568 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 16 09:52:47.681818 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 09:52:47.711826 systemd-networkd[781]: eth0: DHCPv4 address 188.34.167.196/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 16 09:52:47.873513 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 16 09:52:47.880746 ignition[783]: GET result: OK Dec 16 09:52:47.880857 ignition[783]: parsing config with SHA512: 2517c34ada98527ac8575fe61f0c1d82f26bf4dc25a37e443891db24cdbc72b91003a6a4bfbe883060f7aadc0b0895e001b2838a1f296e5b3232478c68fd7039 Dec 16 09:52:47.886315 unknown[783]: fetched base config from "system" Dec 16 09:52:47.886338 unknown[783]: fetched base config from "system" Dec 16 09:52:47.886937 ignition[783]: fetch: fetch complete Dec 16 09:52:47.886352 unknown[783]: fetched user config from "hetzner" Dec 16 09:52:47.886948 ignition[783]: fetch: fetch passed Dec 16 09:52:47.887030 ignition[783]: Ignition finished successfully Dec 16 09:52:47.893158 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 09:52:47.902013 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 09:52:47.946068 ignition[790]: Ignition 2.19.0 Dec 16 09:52:47.946091 ignition[790]: Stage: kargs Dec 16 09:52:47.946421 ignition[790]: no configs at "/usr/lib/ignition/base.d" Dec 16 09:52:47.946446 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:52:47.947904 ignition[790]: kargs: kargs passed Dec 16 09:52:47.951670 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 09:52:47.947992 ignition[790]: Ignition finished successfully Dec 16 09:52:47.965982 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 09:52:47.988361 ignition[797]: Ignition 2.19.0 Dec 16 09:52:47.988373 ignition[797]: Stage: disks Dec 16 09:52:47.988664 ignition[797]: no configs at "/usr/lib/ignition/base.d" Dec 16 09:52:47.988683 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:52:47.989614 ignition[797]: disks: disks passed Dec 16 09:52:47.992206 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 09:52:47.989676 ignition[797]: Ignition finished successfully Dec 16 09:52:47.994402 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 09:52:47.995231 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 09:52:47.995709 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 09:52:47.996210 systemd[1]: Reached target sysinit.target - System Initialization. 
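(For context on the two GET attempts above: attempt #1 runs before DHCP has configured either NIC, so the metadata address is unreachable; once the leases for 10.0.0.4 and 188.34.167.196 arrive, attempt #2 succeeds and Ignition logs the SHA512 of the fetched config. The sketch below mirrors that retry-then-hash behaviour in plain Python for illustration only; the URL is taken from the log, while the attempt count, delay, and timeout are assumptions and the real fetch stage uses its own backoff.)

```python
# Illustrative stand-in for the fetch stage's retry loop, not the real Ignition code.
import hashlib
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # URL taken from the log

def fetch_userdata(attempts=5, delay=2.0):
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                return resp.read()
        except urllib.error.URLError as err:
            # Before DHCP completes, this fails the same way attempt #1 does above.
            print(f"attempt #{attempt} failed: {err.reason}")
            time.sleep(delay)
    raise RuntimeError("userdata endpoint not reachable")

if __name__ == "__main__":
    data = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
```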
Dec 16 09:52:47.997814 systemd[1]: Reached target basic.target - Basic System. Dec 16 09:52:48.004024 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 09:52:48.024567 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 16 09:52:48.029414 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 09:52:48.036945 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 09:52:48.161782 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 16 09:52:48.163669 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 09:52:48.165729 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 09:52:48.172900 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 09:52:48.181000 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 09:52:48.187925 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 09:52:48.190748 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 09:52:48.190942 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 09:52:48.196537 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 09:52:48.209769 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (813) Dec 16 09:52:48.206342 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 09:52:48.228045 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:52:48.228070 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 09:52:48.228080 kernel: BTRFS info (device sda6): using free space tree Dec 16 09:52:48.228091 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 09:52:48.228101 kernel: BTRFS info (device sda6): auto enabling async discard Dec 16 09:52:48.232524 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 09:52:48.279662 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 09:52:48.287703 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Dec 16 09:52:48.292798 coreos-metadata[815]: Dec 16 09:52:48.292 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 16 09:52:48.293728 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 09:52:48.295553 coreos-metadata[815]: Dec 16 09:52:48.294 INFO Fetch successful Dec 16 09:52:48.296131 coreos-metadata[815]: Dec 16 09:52:48.296 INFO wrote hostname ci-4081-2-1-9-dd07cc3c0e to /sysroot/etc/hostname Dec 16 09:52:48.297499 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 09:52:48.302627 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 09:52:48.406468 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 09:52:48.410868 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 09:52:48.413419 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 09:52:48.421330 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
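(The flatcar-metadata-hostname step above fetches the hostname from the Hetzner metadata service and writes it into the new root before switch-root. The following is a simplified stand-in shown only to make that data flow concrete; the URL and the /sysroot/etc/hostname path come from the log, everything else is an assumption and the real work is done by the metadata agent, not this script.)

```python
# Simplified illustration of "wrote hostname ... to /sysroot/etc/hostname" above.
import sys
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # from the log

def write_sysroot_hostname(sysroot="/sysroot"):
    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    target = sys.argv[1] if len(sys.argv) > 1 else "/sysroot"
    print("wrote hostname", write_sysroot_hostname(target))
```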
Dec 16 09:52:48.422978 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:52:48.448005 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 09:52:48.453674 ignition[930]: INFO : Ignition 2.19.0 Dec 16 09:52:48.453674 ignition[930]: INFO : Stage: mount Dec 16 09:52:48.454774 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 09:52:48.454774 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:52:48.456007 ignition[930]: INFO : mount: mount passed Dec 16 09:52:48.456007 ignition[930]: INFO : Ignition finished successfully Dec 16 09:52:48.456896 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 09:52:48.461959 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 09:52:48.474944 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 09:52:48.488088 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) Dec 16 09:52:48.488130 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 16 09:52:48.489872 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 09:52:48.492239 kernel: BTRFS info (device sda6): using free space tree Dec 16 09:52:48.497817 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 16 09:52:48.497863 kernel: BTRFS info (device sda6): auto enabling async discard Dec 16 09:52:48.501503 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 09:52:48.528610 ignition[957]: INFO : Ignition 2.19.0 Dec 16 09:52:48.528610 ignition[957]: INFO : Stage: files Dec 16 09:52:48.529792 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 09:52:48.529792 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:52:48.531982 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Dec 16 09:52:48.531982 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 09:52:48.531982 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 09:52:48.536175 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 09:52:48.537430 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 09:52:48.538541 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 09:52:48.537642 unknown[957]: wrote ssh authorized keys file for user: core Dec 16 09:52:48.540665 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 16 09:52:48.540665 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 09:52:48.540665 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 09:52:48.544306 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 09:52:48.544306 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 16 09:52:48.544306 ignition[957]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 16 09:52:48.544306 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 16 09:52:48.544306 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 16 09:52:48.853109 systemd-networkd[781]: eth1: Gained IPv6LL Dec 16 09:52:48.981242 systemd-networkd[781]: eth0: Gained IPv6LL Dec 16 09:52:49.102052 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 16 09:52:49.390859 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 16 09:52:49.392789 ignition[957]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Dec 16 09:52:49.392789 ignition[957]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 09:52:49.395665 ignition[957]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 16 09:52:49.395665 ignition[957]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Dec 16 09:52:49.395665 ignition[957]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 09:52:49.395665 ignition[957]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 09:52:49.395665 ignition[957]: INFO : files: files passed Dec 16 09:52:49.395665 ignition[957]: INFO : Ignition finished successfully Dec 16 09:52:49.398336 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 09:52:49.413141 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 09:52:49.424012 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 09:52:49.431574 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 09:52:49.431810 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 09:52:49.449805 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 09:52:49.449805 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 09:52:49.453612 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 09:52:49.453673 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 09:52:49.455568 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 09:52:49.465066 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 09:52:49.514301 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 09:52:49.514495 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
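(The files stage above leaves a sysext image under /opt/extensions and an absolute symlink under /etc/extensions so systemd-sysext can merge it later in boot. This small sketch reproduces that layout under a scratch directory purely for illustration; the image path and link target are copied from the log, while the scratch root and the empty placeholder file are assumptions.)

```python
# Illustrative reproduction of the file layout the files stage reports above.
from pathlib import Path

# Link target exactly as written by Ignition in the log.
RAW_TARGET = "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"

def link_kubernetes_sysext(root: Path):
    image = root / RAW_TARGET.lstrip("/")
    link = root / "etc/extensions/kubernetes.raw"
    image.parent.mkdir(parents=True, exist_ok=True)
    image.touch(exist_ok=True)                 # placeholder for the downloaded .raw image
    link.parent.mkdir(parents=True, exist_ok=True)
    if not link.is_symlink():
        link.symlink_to(RAW_TARGET)            # absolute target, as on the real sysroot
    return link

if __name__ == "__main__":
    print(link_kubernetes_sysext(Path("./scratch-root")))
```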
Dec 16 09:52:49.517039 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 09:52:49.519580 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 09:52:49.520538 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 09:52:49.525028 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 09:52:49.556476 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 09:52:49.564009 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 09:52:49.593464 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 09:52:49.596312 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 09:52:49.599177 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 09:52:49.600378 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 09:52:49.600632 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 09:52:49.603427 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 09:52:49.605114 systemd[1]: Stopped target basic.target - Basic System. Dec 16 09:52:49.607805 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 09:52:49.610311 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 09:52:49.612736 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 09:52:49.615562 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 09:52:49.618015 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 09:52:49.620668 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 09:52:49.623167 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 09:52:49.625694 systemd[1]: Stopped target swap.target - Swaps. Dec 16 09:52:49.628209 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 09:52:49.628438 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 09:52:49.631089 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 09:52:49.632710 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 09:52:49.635169 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 09:52:49.635387 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 09:52:49.638224 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 09:52:49.638533 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 09:52:49.642304 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 09:52:49.642553 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 09:52:49.644126 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 09:52:49.644472 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 09:52:49.646314 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 09:52:49.646640 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 09:52:49.656475 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 16 09:52:49.666253 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 09:52:49.670298 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 09:52:49.670570 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 09:52:49.677977 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 09:52:49.678207 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 09:52:49.688476 ignition[1011]: INFO : Ignition 2.19.0 Dec 16 09:52:49.688476 ignition[1011]: INFO : Stage: umount Dec 16 09:52:49.694958 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 09:52:49.694958 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 16 09:52:49.694822 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 09:52:49.702191 ignition[1011]: INFO : umount: umount passed Dec 16 09:52:49.702191 ignition[1011]: INFO : Ignition finished successfully Dec 16 09:52:49.695054 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 09:52:49.699613 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 09:52:49.699856 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 09:52:49.710666 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 09:52:49.710841 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 09:52:49.716838 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 09:52:49.716897 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 09:52:49.725442 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 09:52:49.725494 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 09:52:49.726791 systemd[1]: Stopped target network.target - Network. Dec 16 09:52:49.728201 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 09:52:49.728263 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 09:52:49.728775 systemd[1]: Stopped target paths.target - Path Units. Dec 16 09:52:49.729270 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 09:52:49.732850 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 09:52:49.733568 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 09:52:49.733990 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 09:52:49.734424 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 09:52:49.734469 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 09:52:49.737893 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 09:52:49.737935 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 09:52:49.745358 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 09:52:49.745407 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 09:52:49.750213 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 09:52:49.750266 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 09:52:49.751406 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 09:52:49.752790 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 09:52:49.755233 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Dec 16 09:52:49.756126 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 09:52:49.756237 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 09:52:49.756812 systemd-networkd[781]: eth0: DHCPv6 lease lost Dec 16 09:52:49.758410 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 09:52:49.758499 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 09:52:49.760812 systemd-networkd[781]: eth1: DHCPv6 lease lost Dec 16 09:52:49.763031 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 09:52:49.763171 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 09:52:49.765082 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 09:52:49.765197 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 09:52:49.770016 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 09:52:49.770076 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 09:52:49.775878 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 09:52:49.777239 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 09:52:49.777300 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 09:52:49.780368 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 09:52:49.780465 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 09:52:49.781393 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 09:52:49.781491 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 09:52:49.783242 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 09:52:49.783340 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 09:52:49.785074 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 09:52:49.804486 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 09:52:49.804633 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 09:52:49.807094 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 09:52:49.807254 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 09:52:49.809485 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 09:52:49.809556 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 09:52:49.811023 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 09:52:49.811062 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 09:52:49.812834 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 09:52:49.812882 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 09:52:49.815352 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 09:52:49.815399 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 09:52:49.816912 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 09:52:49.816972 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 09:52:49.824930 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 09:52:49.827261 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 16 09:52:49.827387 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 09:52:49.828364 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 09:52:49.828445 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 09:52:49.831914 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 09:52:49.832020 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 09:52:49.833347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 09:52:49.833451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 09:52:49.836024 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 09:52:49.836298 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 09:52:49.838470 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 09:52:49.847097 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 09:52:49.859371 systemd[1]: Switching root. Dec 16 09:52:49.906705 systemd-journald[188]: Journal stopped Dec 16 09:52:51.179825 systemd-journald[188]: Received SIGTERM from PID 1 (systemd). Dec 16 09:52:51.179922 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 09:52:51.179950 kernel: SELinux: policy capability open_perms=1 Dec 16 09:52:51.179962 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 09:52:51.179972 kernel: SELinux: policy capability always_check_network=0 Dec 16 09:52:51.179988 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 09:52:51.179999 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 09:52:51.180012 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 09:52:51.180023 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 09:52:51.180034 kernel: audit: type=1403 audit(1734342770.076:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 09:52:51.180047 systemd[1]: Successfully loaded SELinux policy in 50.063ms. Dec 16 09:52:51.180077 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.303ms. Dec 16 09:52:51.180090 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 16 09:52:51.180101 systemd[1]: Detected virtualization kvm. Dec 16 09:52:51.180114 systemd[1]: Detected architecture x86-64. Dec 16 09:52:51.180129 systemd[1]: Detected first boot. Dec 16 09:52:51.180140 systemd[1]: Hostname set to <ci-4081-2-1-9-dd07cc3c0e>. Dec 16 09:52:51.180153 systemd[1]: Initializing machine ID from VM UUID. Dec 16 09:52:51.180165 zram_generator::config[1053]: No configuration found. Dec 16 09:52:51.180178 systemd[1]: Populated /etc with preset unit settings. Dec 16 09:52:51.180190 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 09:52:51.180201 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 09:52:51.180213 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 09:52:51.180233 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 09:52:51.180244 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 09:52:51.180256 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 09:52:51.180268 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 09:52:51.180279 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 09:52:51.180291 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 09:52:51.180303 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 09:52:51.180314 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 09:52:51.180326 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 09:52:51.180340 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 09:52:51.180353 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 09:52:51.180364 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 09:52:51.180376 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 09:52:51.180392 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 09:52:51.180404 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 09:52:51.180416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 09:52:51.180427 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 09:52:51.180442 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 09:52:51.180453 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 09:52:51.180465 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 09:52:51.180477 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 09:52:51.180491 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 09:52:51.180502 systemd[1]: Reached target slices.target - Slice Units. Dec 16 09:52:51.180514 systemd[1]: Reached target swap.target - Swaps. Dec 16 09:52:51.180533 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 09:52:51.180545 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 09:52:51.180557 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 09:52:51.180569 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 09:52:51.180594 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 09:52:51.180606 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 09:52:51.180618 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 09:52:51.180630 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 09:52:51.180642 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 09:52:51.180656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:52:51.180668 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Dec 16 09:52:51.180680 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 09:52:51.180692 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 09:52:51.180704 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 09:52:51.180716 systemd[1]: Reached target machines.target - Containers. Dec 16 09:52:51.180727 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 09:52:51.180739 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 09:52:51.181962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 09:52:51.181980 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 09:52:51.182000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 09:52:51.182012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 09:52:51.182024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 09:52:51.182036 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 09:52:51.182048 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 09:52:51.182060 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 09:52:51.182080 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 09:52:51.182094 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 09:52:51.182107 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 09:52:51.182119 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 09:52:51.182131 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 09:52:51.182173 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 09:52:51.182193 kernel: loop: module loaded Dec 16 09:52:51.182214 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 09:52:51.182231 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 09:52:51.182247 kernel: ACPI: bus type drm_connector registered Dec 16 09:52:51.182263 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 09:52:51.182279 kernel: fuse: init (API version 7.39) Dec 16 09:52:51.182295 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 09:52:51.182311 systemd[1]: Stopped verity-setup.service. Dec 16 09:52:51.182328 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:52:51.182345 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 09:52:51.182364 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 09:52:51.182380 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 09:52:51.182397 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 09:52:51.182413 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 09:52:51.182429 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Dec 16 09:52:51.182441 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 09:52:51.182472 systemd-journald[1143]: Collecting audit messages is disabled. Dec 16 09:52:51.182494 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 09:52:51.182506 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 09:52:51.182518 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 09:52:51.182530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 09:52:51.182545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 09:52:51.182557 systemd-journald[1143]: Journal started Dec 16 09:52:51.182578 systemd-journald[1143]: Runtime Journal (/run/log/journal/c6eda34e7bee419583735e87670cfe5c) is 4.8M, max 38.4M, 33.6M free. Dec 16 09:52:50.766213 systemd[1]: Queued start job for default target multi-user.target. Dec 16 09:52:50.789625 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 09:52:50.790579 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 09:52:51.184820 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 09:52:51.190160 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 09:52:51.190320 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 09:52:51.191096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 09:52:51.191252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 09:52:51.192116 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 09:52:51.192308 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 09:52:51.193081 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 09:52:51.193256 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 09:52:51.194015 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 09:52:51.194796 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 09:52:51.195629 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 09:52:51.212918 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 09:52:51.221869 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 09:52:51.228372 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 09:52:51.229046 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 09:52:51.229079 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 09:52:51.230545 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 16 09:52:51.245862 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 09:52:51.252981 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 09:52:51.253991 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 09:52:51.257907 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 09:52:51.261905 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Dec 16 09:52:51.262478 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 09:52:51.264902 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 09:52:51.265440 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 09:52:51.268963 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 09:52:51.273206 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 09:52:51.283030 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 09:52:51.287666 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 09:52:51.289526 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 09:52:51.298876 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 09:52:51.342713 systemd-journald[1143]: Time spent on flushing to /var/log/journal/c6eda34e7bee419583735e87670cfe5c is 28.440ms for 1119 entries. Dec 16 09:52:51.342713 systemd-journald[1143]: System Journal (/var/log/journal/c6eda34e7bee419583735e87670cfe5c) is 8.0M, max 584.8M, 576.8M free. Dec 16 09:52:51.390082 systemd-journald[1143]: Received client request to flush runtime journal. Dec 16 09:52:51.390120 kernel: loop0: detected capacity change from 0 to 142488 Dec 16 09:52:51.354203 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 09:52:51.354974 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 09:52:51.362427 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 16 09:52:51.371839 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 09:52:51.382348 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 16 09:52:51.397908 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Dec 16 09:52:51.397922 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Dec 16 09:52:51.399229 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 09:52:51.418133 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 09:52:51.426701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 09:52:51.438185 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 09:52:51.447708 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 09:52:51.450650 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 09:52:51.452721 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 16 09:52:51.457800 kernel: loop1: detected capacity change from 0 to 140768 Dec 16 09:52:51.461052 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 16 09:52:51.518972 kernel: loop2: detected capacity change from 0 to 8 Dec 16 09:52:51.514682 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 09:52:51.523910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 16 09:52:51.542024 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Dec 16 09:52:51.542400 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Dec 16 09:52:51.549572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 09:52:51.550811 kernel: loop3: detected capacity change from 0 to 211296 Dec 16 09:52:51.602047 kernel: loop4: detected capacity change from 0 to 142488 Dec 16 09:52:51.634850 kernel: loop5: detected capacity change from 0 to 140768 Dec 16 09:52:51.665909 kernel: loop6: detected capacity change from 0 to 8 Dec 16 09:52:51.670780 kernel: loop7: detected capacity change from 0 to 211296 Dec 16 09:52:51.704357 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 16 09:52:51.705115 (sd-merge)[1202]: Merged extensions into '/usr'. Dec 16 09:52:51.711849 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 09:52:51.711865 systemd[1]: Reloading... Dec 16 09:52:51.831811 zram_generator::config[1235]: No configuration found. Dec 16 09:52:51.898975 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 09:52:51.947911 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 16 09:52:52.002144 systemd[1]: Reloading finished in 289 ms. Dec 16 09:52:52.027572 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 09:52:52.033316 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 09:52:52.042980 systemd[1]: Starting ensure-sysext.service... Dec 16 09:52:52.046325 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 09:52:52.059875 systemd[1]: Reloading requested from client PID 1271 ('systemctl') (unit ensure-sysext.service)... Dec 16 09:52:52.059889 systemd[1]: Reloading... Dec 16 09:52:52.071846 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 09:52:52.072226 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 09:52:52.073743 systemd-tmpfiles[1272]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 09:52:52.074162 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Dec 16 09:52:52.074295 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Dec 16 09:52:52.078200 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 09:52:52.078281 systemd-tmpfiles[1272]: Skipping /boot Dec 16 09:52:52.090892 systemd-tmpfiles[1272]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 09:52:52.090964 systemd-tmpfiles[1272]: Skipping /boot Dec 16 09:52:52.157836 zram_generator::config[1308]: No configuration found. Dec 16 09:52:52.276015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 16 09:52:52.332327 systemd[1]: Reloading finished in 272 ms. Dec 16 09:52:52.352450 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
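(The "(sd-merge)" line above is systemd-sysext picking up the staged extension images, including the kubernetes image written by Ignition earlier, and overlaying them onto /usr. As a rough way to see what would be merged, the sketch below lists images in the usual sysext search directories; the directory set is an assumption about this image, and systemd-sysext also consults /usr/lib/extensions.)

```python
# Illustrative helper: list extension images staged for systemd-sysext.
from pathlib import Path

# Assumed search locations on this image; adjust if the layout differs.
SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")]

def staged_sysexts():
    names = set()
    for base in SEARCH_DIRS:
        if base.is_dir():
            for entry in base.iterdir():
                # Extensions may be .raw images or plain directory trees.
                if entry.suffix == ".raw" or entry.is_dir():
                    names.add(entry.name)
    return sorted(names)

if __name__ == "__main__":
    print("staged extensions:", ", ".join(staged_sysexts()) or "(none)")
```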
Dec 16 09:52:52.353460 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 09:52:52.379123 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 16 09:52:52.386051 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 09:52:52.401009 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 09:52:52.409085 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 09:52:52.423133 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 09:52:52.434176 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 09:52:52.455014 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 09:52:52.458099 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 09:52:52.467175 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:52:52.468726 systemd-udevd[1356]: Using default interface naming scheme 'v255'. Dec 16 09:52:52.469634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 09:52:52.478870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 09:52:52.486987 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 09:52:52.495030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 09:52:52.495735 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 09:52:52.499992 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 09:52:52.500869 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:52:52.510719 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 09:52:52.512672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 09:52:52.512927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 09:52:52.519359 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:52:52.519827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 09:52:52.524741 augenrules[1375]: No rules Dec 16 09:52:52.527435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 09:52:52.528715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 09:52:52.528840 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:52:52.530282 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 16 09:52:52.531262 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 09:52:52.531431 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 16 09:52:52.533738 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 09:52:52.539294 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 09:52:52.539497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 09:52:52.545568 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:52:52.545860 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 09:52:52.557262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 09:52:52.561955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 09:52:52.569024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 09:52:52.569732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 09:52:52.569885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:52:52.571903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 09:52:52.572729 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 09:52:52.583977 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 09:52:52.584920 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 09:52:52.587466 systemd[1]: Finished ensure-sysext.service. Dec 16 09:52:52.593024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 09:52:52.593551 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 09:52:52.613944 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 09:52:52.623902 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 09:52:52.624695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 09:52:52.649230 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 09:52:52.652316 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 09:52:52.669152 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 09:52:52.669394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 09:52:52.673603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 09:52:52.673815 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 09:52:52.674771 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1408) Dec 16 09:52:52.678518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 09:52:52.678606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 16 09:52:52.700778 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1408) Dec 16 09:52:52.711427 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 09:52:52.765875 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 16 09:52:52.796297 systemd-networkd[1393]: lo: Link UP Dec 16 09:52:52.796307 systemd-networkd[1393]: lo: Gained carrier Dec 16 09:52:52.800325 kernel: ACPI: button: Power Button [PWRF] Dec 16 09:52:52.807634 systemd-networkd[1393]: Enumeration completed Dec 16 09:52:52.807786 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 09:52:52.816905 systemd-resolved[1354]: Positive Trust Anchors: Dec 16 09:52:52.816929 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 09:52:52.816965 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 09:52:52.817915 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 09:52:52.819850 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:52:52.819856 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 09:52:52.822195 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 09:52:52.822563 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:52:52.822569 systemd-networkd[1393]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 09:52:52.824020 systemd-networkd[1393]: eth0: Link UP Dec 16 09:52:52.824089 systemd-networkd[1393]: eth0: Gained carrier Dec 16 09:52:52.824140 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:52:52.825267 systemd-resolved[1354]: Using system hostname 'ci-4081-2-1-9-dd07cc3c0e'. Dec 16 09:52:52.829529 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 09:52:52.829972 systemd-networkd[1393]: eth1: Link UP Dec 16 09:52:52.829979 systemd-networkd[1393]: eth1: Gained carrier Dec 16 09:52:52.829991 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:52:52.831128 systemd[1]: Reached target network.target - Network. Dec 16 09:52:52.831569 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 09:52:52.833821 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 09:52:52.834812 systemd[1]: Reached target time-set.target - System Time Set. 
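Both eth0 and eth1 are matched by the catch-all /usr/lib/systemd/network/zz-default.network shipped in the image, which is why systemd-networkd notes the "potentially unpredictable interface name". A minimal sketch of such a fallback unit (the shipped file may carry additional options):

    [Match]
    Name=*

    [Network]
    DHCP=yes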
Dec 16 09:52:52.857532 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 16 09:52:52.857594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:52:52.857721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 09:52:52.860845 systemd-networkd[1393]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 16 09:52:52.862938 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 09:52:52.863667 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Dec 16 09:52:52.865700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 09:52:52.868919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 09:52:52.869547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 09:52:52.869599 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 09:52:52.869613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 09:52:52.893822 systemd-networkd[1393]: eth0: DHCPv4 address 188.34.167.196/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 16 09:52:52.896001 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Dec 16 09:52:52.897082 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Dec 16 09:52:52.898191 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 09:52:52.898663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 09:52:52.899506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 09:52:52.901278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 09:52:52.902204 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 09:52:52.903432 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 09:52:52.904009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 09:52:52.910212 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 09:52:52.910268 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 09:52:52.926730 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 09:52:52.927063 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 16 09:52:52.927245 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 09:52:52.950808 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1396) Dec 16 09:52:52.953982 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
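With the DHCPv4 leases above in place (188.34.167.196/32 on eth0, 10.0.0.4/32 on eth1), the resulting network, DNS and time-sync state can be inspected with the usual systemd tooling, for example:

    networkctl status eth0           # link state, addresses, gateway, DNS
    resolvectl status                # per-link DNS configuration from systemd-resolved
    timedatectl timesync-status      # which NTP server systemd-timesyncd is using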
Dec 16 09:52:52.964848 kernel: EDAC MC: Ver: 3.0.0 Dec 16 09:52:52.968808 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 16 09:52:53.010371 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Dec 16 09:52:53.010454 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Dec 16 09:52:53.020404 kernel: Console: switching to colour dummy device 80x25 Dec 16 09:52:53.022079 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 16 09:52:53.022131 kernel: [drm] features: -context_init Dec 16 09:52:53.031793 kernel: [drm] number of scanouts: 1 Dec 16 09:52:53.031846 kernel: [drm] number of cap sets: 0 Dec 16 09:52:53.035775 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 16 09:52:53.043279 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Dec 16 09:52:53.043323 kernel: Console: switching to colour frame buffer device 160x50 Dec 16 09:52:53.040026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 09:52:53.040305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 09:52:53.049789 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 16 09:52:53.054717 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 16 09:52:53.072995 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 09:52:53.075638 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 09:52:53.089327 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 09:52:53.116139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 09:52:53.179349 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 16 09:52:53.189145 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 16 09:52:53.222003 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 16 09:52:53.271398 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 16 09:52:53.272031 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 09:52:53.272202 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 09:52:53.272544 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 09:52:53.273145 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 16 09:52:53.275650 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 09:52:53.277122 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 09:52:53.277305 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 09:52:53.277446 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 09:52:53.277490 systemd[1]: Reached target paths.target - Path Units. Dec 16 09:52:53.277648 systemd[1]: Reached target timers.target - Timer Units. Dec 16 09:52:53.280241 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 09:52:53.284821 systemd[1]: Starting docker.socket - Docker Socket for the API... 
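The lvmetad warning from lvm2-activation-early is harmless here: with no lvmetad daemon running, LVM simply falls back to scanning block devices directly. Whether any LVM volumes exist at all can be confirmed with the standard tools:

    pvs    # list physical volumes (empty output means LVM is not in use)
    lvs    # list logical volumes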
Dec 16 09:52:53.307904 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 09:52:53.319095 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 16 09:52:53.322294 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 09:52:53.324398 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 09:52:53.326259 systemd[1]: Reached target basic.target - Basic System. Dec 16 09:52:53.331379 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 09:52:53.331449 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 09:52:53.343010 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 09:52:53.345466 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 16 09:52:53.357180 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 09:52:53.378090 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 09:52:53.388556 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 09:52:53.402109 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 09:52:53.404093 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 09:52:53.414499 coreos-metadata[1467]: Dec 16 09:52:53.414 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 16 09:52:53.419454 coreos-metadata[1467]: Dec 16 09:52:53.416 INFO Fetch successful Dec 16 09:52:53.419454 coreos-metadata[1467]: Dec 16 09:52:53.416 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 16 09:52:53.419454 coreos-metadata[1467]: Dec 16 09:52:53.417 INFO Fetch successful Dec 16 09:52:53.418141 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 09:52:53.419663 jq[1471]: false Dec 16 09:52:53.428162 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 16 09:52:53.432995 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 09:52:53.454207 extend-filesystems[1472]: Found loop4 Dec 16 09:52:53.454207 extend-filesystems[1472]: Found loop5 Dec 16 09:52:53.454207 extend-filesystems[1472]: Found loop6 Dec 16 09:52:53.454207 extend-filesystems[1472]: Found loop7 Dec 16 09:52:53.454207 extend-filesystems[1472]: Found sda Dec 16 09:52:53.454207 extend-filesystems[1472]: Found sda1 Dec 16 09:52:53.454207 extend-filesystems[1472]: Found sda2 Dec 16 09:52:53.454207 extend-filesystems[1472]: Found sda3 Dec 16 09:52:53.448960 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 09:52:53.485904 extend-filesystems[1472]: Found usr Dec 16 09:52:53.485904 extend-filesystems[1472]: Found sda4 Dec 16 09:52:53.485904 extend-filesystems[1472]: Found sda6 Dec 16 09:52:53.485904 extend-filesystems[1472]: Found sda7 Dec 16 09:52:53.485904 extend-filesystems[1472]: Found sda9 Dec 16 09:52:53.485904 extend-filesystems[1472]: Checking size of /dev/sda9 Dec 16 09:52:53.485904 extend-filesystems[1472]: Resized partition /dev/sda9 Dec 16 09:52:53.531836 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 16 09:52:53.468850 systemd[1]: Starting systemd-logind.service - User Login Management... 
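coreos-metadata pulls the instance metadata from Hetzner's link-local endpoint shown above; the same documents can be fetched by hand when debugging, for example:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks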
Dec 16 09:52:53.532077 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024) Dec 16 09:52:53.499838 dbus-daemon[1470]: [system] SELinux support is enabled Dec 16 09:52:53.479124 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 09:52:53.491065 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 09:52:53.499007 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 09:52:53.517780 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 09:52:53.528012 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 09:52:53.536532 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 16 09:52:53.543623 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 09:52:53.543853 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 09:52:53.544200 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 09:52:53.544385 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 16 09:52:53.566629 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 09:52:53.569645 jq[1494]: true Dec 16 09:52:53.572947 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1395) Dec 16 09:52:53.566657 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 09:52:53.567507 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 09:52:53.567523 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 09:52:53.584073 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 09:52:53.590181 update_engine[1488]: I20241216 09:52:53.583917 1488 main.cc:92] Flatcar Update Engine starting Dec 16 09:52:53.584301 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 09:52:53.593773 update_engine[1488]: I20241216 09:52:53.591436 1488 update_check_scheduler.cc:74] Next update check in 4m59s Dec 16 09:52:53.618264 systemd[1]: Started update-engine.service - Update Engine. Dec 16 09:52:53.627654 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 09:52:53.634782 jq[1505]: true Dec 16 09:52:53.635946 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 09:52:53.696788 systemd-logind[1480]: New seat seat0. Dec 16 09:52:53.707140 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 16 09:52:53.707641 systemd-logind[1480]: Watching system buttons on /dev/input/event2 (Power Button) Dec 16 09:52:53.707674 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 09:52:53.712764 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 09:52:53.721857 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
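The EXT4-fs messages above show the root filesystem on /dev/sda9 being grown online from 1617920 to 9393147 4k blocks by the Extend Filesystems service. Done by hand, the equivalent steps would look roughly like the following (a sketch assuming cloud-utils' growpart is available; the Flatcar unit drives resize2fs itself):

    growpart /dev/sda 9      # grow partition 9 to the end of the disk
    resize2fs /dev/sda9      # grow the mounted ext4 filesystem online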
Dec 16 09:52:53.722964 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 09:52:53.748958 extend-filesystems[1485]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 16 09:52:53.748958 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 16 09:52:53.748958 extend-filesystems[1485]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 16 09:52:53.772493 extend-filesystems[1472]: Resized filesystem in /dev/sda9 Dec 16 09:52:53.772493 extend-filesystems[1472]: Found sr0 Dec 16 09:52:53.773924 sshd_keygen[1504]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 09:52:53.750260 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 09:52:53.750491 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 09:52:53.785552 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Dec 16 09:52:53.788494 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 09:52:53.800603 systemd[1]: Starting sshkeys.service... Dec 16 09:52:53.809555 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 09:52:53.826731 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 09:52:53.836363 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 09:52:53.837625 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 16 09:52:53.852992 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 09:52:53.863688 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 09:52:53.863969 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 09:52:53.878585 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 09:52:53.878934 coreos-metadata[1550]: Dec 16 09:52:53.878 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 16 09:52:53.883024 coreos-metadata[1550]: Dec 16 09:52:53.882 INFO Fetch successful Dec 16 09:52:53.886243 unknown[1550]: wrote ssh authorized keys file for user: core Dec 16 09:52:53.894645 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 09:52:53.910494 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 09:52:53.921675 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 09:52:53.926312 systemd[1]: Reached target getty.target - Login Prompts. Dec 16 09:52:53.932241 update-ssh-keys[1565]: Updated "/home/core/.ssh/authorized_keys" Dec 16 09:52:53.933440 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 09:52:53.941354 systemd[1]: Finished sshkeys.service. Dec 16 09:52:53.969112 containerd[1506]: time="2024-12-16T09:52:53.968970741Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 16 09:52:53.993061 containerd[1506]: time="2024-12-16T09:52:53.992913228Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 16 09:52:53.994841 containerd[1506]: time="2024-12-16T09:52:53.994792522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 16 09:52:53.994841 containerd[1506]: time="2024-12-16T09:52:53.994831836Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 16 09:52:53.994913 containerd[1506]: time="2024-12-16T09:52:53.994848427Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 16 09:52:53.995075 containerd[1506]: time="2024-12-16T09:52:53.995048162Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 16 09:52:53.995099 containerd[1506]: time="2024-12-16T09:52:53.995073970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 16 09:52:53.995174 containerd[1506]: time="2024-12-16T09:52:53.995147538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 16 09:52:53.995174 containerd[1506]: time="2024-12-16T09:52:53.995164740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 16 09:52:53.995399 containerd[1506]: time="2024-12-16T09:52:53.995365076Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 16 09:52:53.995399 containerd[1506]: time="2024-12-16T09:52:53.995385855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 16 09:52:53.995399 containerd[1506]: time="2024-12-16T09:52:53.995398058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 16 09:52:53.995483 containerd[1506]: time="2024-12-16T09:52:53.995407916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 16 09:52:53.995511 containerd[1506]: time="2024-12-16T09:52:53.995495441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 16 09:52:53.995848 containerd[1506]: time="2024-12-16T09:52:53.995815761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 16 09:52:53.995986 containerd[1506]: time="2024-12-16T09:52:53.995955994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 16 09:52:53.995986 containerd[1506]: time="2024-12-16T09:52:53.995975461Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 16 09:52:53.996100 containerd[1506]: time="2024-12-16T09:52:53.996075268Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 16 09:52:53.996183 containerd[1506]: time="2024-12-16T09:52:53.996135410Z" level=info msg="metadata content store policy set" policy=shared Dec 16 09:52:54.000774 containerd[1506]: time="2024-12-16T09:52:54.000727581Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 16 09:52:54.000811 containerd[1506]: time="2024-12-16T09:52:54.000792273Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 16 09:52:54.000811 containerd[1506]: time="2024-12-16T09:52:54.000815747Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 16 09:52:54.000811 containerd[1506]: time="2024-12-16T09:52:54.000831597Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 16 09:52:54.000915 containerd[1506]: time="2024-12-16T09:52:54.000846615Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 16 09:52:54.001019 containerd[1506]: time="2024-12-16T09:52:54.000988411Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 16 09:52:54.001321 containerd[1506]: time="2024-12-16T09:52:54.001260641Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 16 09:52:54.001537 containerd[1506]: time="2024-12-16T09:52:54.001505951Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 16 09:52:54.001561 containerd[1506]: time="2024-12-16T09:52:54.001536619Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 16 09:52:54.001599 containerd[1506]: time="2024-12-16T09:52:54.001557027Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 16 09:52:54.001599 containerd[1506]: time="2024-12-16T09:52:54.001589628Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 16 09:52:54.001641 containerd[1506]: time="2024-12-16T09:52:54.001609094Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 16 09:52:54.001641 containerd[1506]: time="2024-12-16T09:52:54.001628872Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 16 09:52:54.001677 containerd[1506]: time="2024-12-16T09:52:54.001649380Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 16 09:52:54.001701 containerd[1506]: time="2024-12-16T09:52:54.001678535Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 16 09:52:54.001724 containerd[1506]: time="2024-12-16T09:52:54.001699054Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 16 09:52:54.001724 containerd[1506]: time="2024-12-16T09:52:54.001716897Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 16 09:52:54.001724 containerd[1506]: time="2024-12-16T09:52:54.001733518Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 16 09:52:54.001914 containerd[1506]: time="2024-12-16T09:52:54.001791687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.001914 containerd[1506]: time="2024-12-16T09:52:54.001814940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.001914 containerd[1506]: time="2024-12-16T09:52:54.001832273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.001914 containerd[1506]: time="2024-12-16T09:52:54.001849285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.001914 containerd[1506]: time="2024-12-16T09:52:54.001864714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.001914 containerd[1506]: time="2024-12-16T09:52:54.001882026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.001914 containerd[1506]: time="2024-12-16T09:52:54.001897786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.001914 containerd[1506]: time="2024-12-16T09:52:54.001915399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.002072 containerd[1506]: time="2024-12-16T09:52:54.001932681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.002072 containerd[1506]: time="2024-12-16T09:52:54.001952589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.002072 containerd[1506]: time="2024-12-16T09:52:54.002040243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.002072 containerd[1506]: time="2024-12-16T09:52:54.002057435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.002147 containerd[1506]: time="2024-12-16T09:52:54.002074858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.002147 containerd[1506]: time="2024-12-16T09:52:54.002097100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 16 09:52:54.002147 containerd[1506]: time="2024-12-16T09:52:54.002123379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.002198 containerd[1506]: time="2024-12-16T09:52:54.002152634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.002198 containerd[1506]: time="2024-12-16T09:52:54.002169646Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 16 09:52:54.002295 containerd[1506]: time="2024-12-16T09:52:54.002226943Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 16 09:52:54.002295 containerd[1506]: time="2024-12-16T09:52:54.002256929Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 16 09:52:54.002295 containerd[1506]: time="2024-12-16T09:52:54.002269944Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 16 09:52:54.002295 containerd[1506]: time="2024-12-16T09:52:54.002282217Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 16 09:52:54.002295 containerd[1506]: time="2024-12-16T09:52:54.002292897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.002393 containerd[1506]: time="2024-12-16T09:52:54.002306382Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 16 09:52:54.002393 containerd[1506]: time="2024-12-16T09:52:54.002317884Z" level=info msg="NRI interface is disabled by configuration." Dec 16 09:52:54.002393 containerd[1506]: time="2024-12-16T09:52:54.002327562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 16 09:52:54.002684 containerd[1506]: time="2024-12-16T09:52:54.002620241Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 16 09:52:54.002684 containerd[1506]: time="2024-12-16T09:52:54.002682037Z" level=info msg="Connect containerd service" Dec 16 09:52:54.002849 containerd[1506]: time="2024-12-16T09:52:54.002718795Z" level=info msg="using legacy CRI server" Dec 16 09:52:54.002849 containerd[1506]: time="2024-12-16T09:52:54.002726630Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 09:52:54.002849 containerd[1506]: time="2024-12-16T09:52:54.002821147Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 16 09:52:54.004817 containerd[1506]: time="2024-12-16T09:52:54.003906523Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 09:52:54.004817 containerd[1506]: time="2024-12-16T09:52:54.004540101Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 09:52:54.004817 containerd[1506]: time="2024-12-16T09:52:54.004610903Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 09:52:54.004817 containerd[1506]: time="2024-12-16T09:52:54.004657842Z" level=info msg="Start subscribing containerd event" Dec 16 09:52:54.005052 containerd[1506]: time="2024-12-16T09:52:54.004747560Z" level=info msg="Start recovering state" Dec 16 09:52:54.005827 containerd[1506]: time="2024-12-16T09:52:54.005306278Z" level=info msg="Start event monitor" Dec 16 09:52:54.005827 containerd[1506]: time="2024-12-16T09:52:54.005342366Z" level=info msg="Start snapshots syncer" Dec 16 09:52:54.005827 containerd[1506]: time="2024-12-16T09:52:54.005357604Z" level=info msg="Start cni network conf syncer for default" Dec 16 09:52:54.005827 containerd[1506]: time="2024-12-16T09:52:54.005368294Z" level=info msg="Start streaming server" Dec 16 09:52:54.005827 containerd[1506]: time="2024-12-16T09:52:54.005477820Z" level=info msg="containerd successfully booted in 0.038290s" Dec 16 09:52:54.005562 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 09:52:54.357026 systemd-networkd[1393]: eth1: Gained IPv6LL Dec 16 09:52:54.358069 systemd-networkd[1393]: eth0: Gained IPv6LL Dec 16 09:52:54.358396 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Dec 16 09:52:54.360030 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Dec 16 09:52:54.365508 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 09:52:54.368913 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 09:52:54.381183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:52:54.394190 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 09:52:54.439278 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 09:52:55.567991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:52:55.572312 systemd[1]: Reached target multi-user.target - Multi-User System. 
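The CRI configuration dump above shows overlayfs as the active snapshotter (the aufs, btrfs, devmapper and zfs snapshotters were skipped earlier because their prerequisites are missing) and SystemdCgroup:true for the runc runtime. Expressed as a containerd config.toml fragment, that runtime setting looks roughly like this sketch of the v2 configuration format, not necessarily the file shipped on this node:

    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

The loaded plugin set can be confirmed at runtime with "ctr plugins ls", assuming the ctr client is installed alongside containerd.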
Dec 16 09:52:55.572895 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:52:55.577884 systemd[1]: Startup finished in 1.626s (kernel) + 6.259s (initrd) + 5.549s (userspace) = 13.434s. Dec 16 09:52:56.598350 kubelet[1590]: E1216 09:52:56.598202 1590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:52:56.604009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:52:56.604403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:52:56.607519 systemd[1]: kubelet.service: Consumed 1.596s CPU time. Dec 16 09:53:06.854599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 09:53:06.861086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:53:07.055806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:53:07.060468 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:53:07.129686 kubelet[1610]: E1216 09:53:07.129489 1610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:53:07.143966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:53:07.144264 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:53:17.395156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 09:53:17.407171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:53:17.618372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:53:17.626018 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:53:17.675414 kubelet[1626]: E1216 09:53:17.675216 1626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:53:17.679594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:53:17.680010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:53:25.550924 systemd-timesyncd[1407]: Contacted time server 78.47.56.71:123 (2.flatcar.pool.ntp.org). Dec 16 09:53:25.551018 systemd-timesyncd[1407]: Initial clock synchronization to Mon 2024-12-16 09:53:25.550590 UTC. Dec 16 09:53:25.551709 systemd-resolved[1354]: Clock change detected. Flushing caches. Dec 16 09:53:28.726292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 09:53:28.733473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
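Every kubelet start in the stretch that follows fails for the same reason: /var/lib/kubelet/config.yaml does not exist yet because the node has not been joined to a cluster. That file is a KubeletConfiguration document normally written by kubeadm during init/join; a minimal illustrative sketch (not the file this node eventually receives) would be:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd

Until something writes that file, systemd keeps rescheduling the unit, which is exactly the pattern of "Scheduled restart job, restart counter is at N" messages below.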
Dec 16 09:53:28.938990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:53:28.949394 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:53:29.012919 kubelet[1643]: E1216 09:53:29.012747 1643 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:53:29.017268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:53:29.017572 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:53:39.022913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 09:53:39.028329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:53:39.218914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:53:39.223186 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:53:39.293550 kubelet[1659]: E1216 09:53:39.293346 1659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:53:39.302799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:53:39.303160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:53:39.985396 update_engine[1488]: I20241216 09:53:39.985193 1488 update_attempter.cc:509] Updating boot flags... Dec 16 09:53:40.084645 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1676) Dec 16 09:53:40.151093 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1676) Dec 16 09:53:40.211157 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1676) Dec 16 09:53:49.523342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 16 09:53:49.530861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:53:49.751499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:53:49.755488 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:53:49.812593 kubelet[1696]: E1216 09:53:49.812375 1696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:53:49.817104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:53:49.817568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:54:00.023259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
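In between the kubelet restarts, update_engine reports "Updating boot flags...", which marks the current boot partition as successfully booted. On Flatcar the updater's state can be queried with the bundled client, for example:

    update_engine_client -status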
Dec 16 09:54:00.030348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:54:00.256793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:54:00.261206 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:54:00.324028 kubelet[1712]: E1216 09:54:00.323791 1712 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:54:00.335385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:54:00.335697 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:54:10.523249 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 16 09:54:10.531348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:54:10.715041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:54:10.728381 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:54:10.773556 kubelet[1728]: E1216 09:54:10.773315 1728 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:54:10.777210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:54:10.777589 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:54:21.022815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 16 09:54:21.028276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:54:21.232081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:54:21.247348 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:54:21.293644 kubelet[1745]: E1216 09:54:21.293481 1745 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:54:21.298734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:54:21.298944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:54:31.523146 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 16 09:54:31.536388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:54:31.752924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 09:54:31.757479 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:54:31.824727 kubelet[1761]: E1216 09:54:31.824514 1761 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:54:31.832671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:54:31.833385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:54:42.023165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 16 09:54:42.031383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:54:42.247870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:54:42.252683 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:54:42.298303 kubelet[1777]: E1216 09:54:42.298151 1777 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:54:42.305397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:54:42.305600 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:54:52.523425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 16 09:54:52.530371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:54:52.745393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:54:52.745424 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:54:52.794781 kubelet[1793]: E1216 09:54:52.794613 1793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:54:52.803961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:54:52.804225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:54:53.234896 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 09:54:53.240492 systemd[1]: Started sshd@0-188.34.167.196:22-147.75.109.163:56178.service - OpenSSH per-connection server daemon (147.75.109.163:56178). Dec 16 09:54:54.250253 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 56178 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:54:54.255037 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:54:54.275336 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 09:54:54.285590 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 16 09:54:54.291605 systemd-logind[1480]: New session 1 of user core. Dec 16 09:54:54.321345 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 09:54:54.331641 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 09:54:54.355709 (systemd)[1807]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 09:54:54.517482 systemd[1807]: Queued start job for default target default.target. Dec 16 09:54:54.524304 systemd[1807]: Created slice app.slice - User Application Slice. Dec 16 09:54:54.524330 systemd[1807]: Reached target paths.target - Paths. Dec 16 09:54:54.524342 systemd[1807]: Reached target timers.target - Timers. Dec 16 09:54:54.525915 systemd[1807]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 09:54:54.566099 systemd[1807]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 09:54:54.566424 systemd[1807]: Reached target sockets.target - Sockets. Dec 16 09:54:54.566464 systemd[1807]: Reached target basic.target - Basic System. Dec 16 09:54:54.566553 systemd[1807]: Reached target default.target - Main User Target. Dec 16 09:54:54.566634 systemd[1807]: Startup finished in 196ms. Dec 16 09:54:54.567420 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 09:54:54.578341 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 09:54:55.279893 systemd[1]: Started sshd@1-188.34.167.196:22-147.75.109.163:56192.service - OpenSSH per-connection server daemon (147.75.109.163:56192). Dec 16 09:54:56.279855 sshd[1818]: Accepted publickey for core from 147.75.109.163 port 56192 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:54:56.283167 sshd[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:54:56.293038 systemd-logind[1480]: New session 2 of user core. Dec 16 09:54:56.303405 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 09:54:56.965031 sshd[1818]: pam_unix(sshd:session): session closed for user core Dec 16 09:54:56.972375 systemd[1]: sshd@1-188.34.167.196:22-147.75.109.163:56192.service: Deactivated successfully. Dec 16 09:54:56.975800 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 09:54:56.980598 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Dec 16 09:54:56.983106 systemd-logind[1480]: Removed session 2. Dec 16 09:54:57.140943 systemd[1]: Started sshd@2-188.34.167.196:22-147.75.109.163:58036.service - OpenSSH per-connection server daemon (147.75.109.163:58036). Dec 16 09:54:58.130666 sshd[1825]: Accepted publickey for core from 147.75.109.163 port 58036 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:54:58.133855 sshd[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:54:58.143195 systemd-logind[1480]: New session 3 of user core. Dec 16 09:54:58.150324 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 09:54:58.803678 sshd[1825]: pam_unix(sshd:session): session closed for user core Dec 16 09:54:58.810691 systemd[1]: sshd@2-188.34.167.196:22-147.75.109.163:58036.service: Deactivated successfully. Dec 16 09:54:58.814549 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 09:54:58.815612 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. Dec 16 09:54:58.817405 systemd-logind[1480]: Removed session 3. 
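Each SSH login above produces a per-connection sshd@… unit, a session-N.scope, and, for the first login, the user@500.service manager for the core user. The resulting logind state can be inspected with, for example:

    loginctl list-sessions        # active sessions with user, seat and TTY
    loginctl user-status core     # the user's service manager and session scopes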
Dec 16 09:54:58.981503 systemd[1]: Started sshd@3-188.34.167.196:22-147.75.109.163:58052.service - OpenSSH per-connection server daemon (147.75.109.163:58052). Dec 16 09:54:59.982526 sshd[1832]: Accepted publickey for core from 147.75.109.163 port 58052 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:54:59.985972 sshd[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:54:59.995560 systemd-logind[1480]: New session 4 of user core. Dec 16 09:55:00.002461 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 09:55:00.671604 sshd[1832]: pam_unix(sshd:session): session closed for user core Dec 16 09:55:00.676122 systemd[1]: sshd@3-188.34.167.196:22-147.75.109.163:58052.service: Deactivated successfully. Dec 16 09:55:00.678735 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 09:55:00.680878 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. Dec 16 09:55:00.682572 systemd-logind[1480]: Removed session 4. Dec 16 09:55:00.841611 systemd[1]: Started sshd@4-188.34.167.196:22-147.75.109.163:58062.service - OpenSSH per-connection server daemon (147.75.109.163:58062). Dec 16 09:55:01.813639 sshd[1839]: Accepted publickey for core from 147.75.109.163 port 58062 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:55:01.815742 sshd[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:55:01.821582 systemd-logind[1480]: New session 5 of user core. Dec 16 09:55:01.832636 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 09:55:02.356105 sudo[1842]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 09:55:02.357051 sudo[1842]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:55:02.380632 sudo[1842]: pam_unix(sudo:session): session closed for user root Dec 16 09:55:02.541146 sshd[1839]: pam_unix(sshd:session): session closed for user core Dec 16 09:55:02.549874 systemd[1]: sshd@4-188.34.167.196:22-147.75.109.163:58062.service: Deactivated successfully. Dec 16 09:55:02.554440 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 09:55:02.555714 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Dec 16 09:55:02.557718 systemd-logind[1480]: Removed session 5. Dec 16 09:55:02.721552 systemd[1]: Started sshd@5-188.34.167.196:22-147.75.109.163:58064.service - OpenSSH per-connection server daemon (147.75.109.163:58064). Dec 16 09:55:03.023104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Dec 16 09:55:03.034367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:55:03.232391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 09:55:03.236945 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 09:55:03.285128 kubelet[1856]: E1216 09:55:03.284200 1856 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 09:55:03.293081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 09:55:03.293292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 09:55:03.737940 sshd[1847]: Accepted publickey for core from 147.75.109.163 port 58064 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:55:03.740986 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:55:03.751823 systemd-logind[1480]: New session 6 of user core. Dec 16 09:55:03.761309 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 09:55:04.274798 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 09:55:04.275543 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:55:04.283126 sudo[1867]: pam_unix(sudo:session): session closed for user root Dec 16 09:55:04.296853 sudo[1866]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 16 09:55:04.297816 sudo[1866]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:55:04.323416 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 16 09:55:04.352836 auditctl[1870]: No rules Dec 16 09:55:04.354982 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 09:55:04.355367 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 16 09:55:04.362409 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 16 09:55:04.420494 augenrules[1888]: No rules Dec 16 09:55:04.421875 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 16 09:55:04.424444 sudo[1866]: pam_unix(sudo:session): session closed for user root Dec 16 09:55:04.586642 sshd[1847]: pam_unix(sshd:session): session closed for user core Dec 16 09:55:04.596203 systemd[1]: sshd@5-188.34.167.196:22-147.75.109.163:58064.service: Deactivated successfully. Dec 16 09:55:04.600516 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 09:55:04.602632 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. Dec 16 09:55:04.605049 systemd-logind[1480]: Removed session 6. Dec 16 09:55:04.767904 systemd[1]: Started sshd@6-188.34.167.196:22-147.75.109.163:58068.service - OpenSSH per-connection server daemon (147.75.109.163:58068). Dec 16 09:55:05.772168 sshd[1896]: Accepted publickey for core from 147.75.109.163 port 58068 ssh2: RSA SHA256:zB/zPQRxUCFkkFdvDftk99JQqA6bP3NHPa7FnaDUxKk Dec 16 09:55:05.775453 sshd[1896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 09:55:05.785485 systemd-logind[1480]: New session 7 of user core. Dec 16 09:55:05.793496 systemd[1]: Started session-7.scope - Session 7 of User core. 
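The kubelet exit above is the expected state before `kubeadm init`/`kubeadm join` has run on this node: the unit keeps restarting until /var/lib/kubelet/config.yaml exists. As a rough illustration (not taken from this machine), the sketch below builds the kind of configuration that file would hold, using the upstream k8s.io/kubelet/config/v1beta1 types; the three fields chosen mirror values that appear later in this log (systemd cgroup driver, the /etc/kubernetes/manifests static-pod path, a containerd CRI endpoint), and the exact socket path is an assumption.

```go
package main

import (
	"fmt"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Hypothetical starting point for /var/lib/kubelet/config.yaml; on this node
	// kubeadm would normally generate the real file during init/join, which is
	// why the kubelet above keeps exiting until that happens.
	cfg := kubeletv1beta1.KubeletConfiguration{
		CgroupDriver:             "systemd",                   // matches CgroupDriver:"systemd" later in this log
		StaticPodPath:            "/etc/kubernetes/manifests", // the path the kubelet watches for static pods
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumed containerd socket
	}
	cfg.Kind = "KubeletConfiguration"
	cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"

	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // YAML that could seed /var/lib/kubelet/config.yaml
}
```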
Dec 16 09:55:06.298679 sudo[1899]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 09:55:06.299568 sudo[1899]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 09:55:07.293325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:55:07.302453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:55:07.351426 systemd[1]: Reloading requested from client PID 1937 ('systemctl') (unit session-7.scope)... Dec 16 09:55:07.351635 systemd[1]: Reloading... Dec 16 09:55:07.485087 zram_generator::config[1979]: No configuration found. Dec 16 09:55:07.603405 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 16 09:55:07.687283 systemd[1]: Reloading finished in 334 ms. Dec 16 09:55:07.738648 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 09:55:07.738742 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 09:55:07.739032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:55:07.741035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 09:55:07.910163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 09:55:07.916372 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 09:55:07.979250 kubelet[2028]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 09:55:07.979250 kubelet[2028]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 16 09:55:07.979250 kubelet[2028]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 09:55:07.979815 kubelet[2028]: I1216 09:55:07.979352 2028 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 09:55:08.363440 kubelet[2028]: I1216 09:55:08.363397 2028 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 16 09:55:08.363990 kubelet[2028]: I1216 09:55:08.363639 2028 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 09:55:08.364521 kubelet[2028]: I1216 09:55:08.364495 2028 server.go:919] "Client rotation is on, will bootstrap in background" Dec 16 09:55:08.390368 kubelet[2028]: I1216 09:55:08.390337 2028 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 09:55:08.400809 kubelet[2028]: I1216 09:55:08.400781 2028 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 09:55:08.401426 kubelet[2028]: I1216 09:55:08.401202 2028 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 09:55:08.401426 kubelet[2028]: I1216 09:55:08.401361 2028 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 16 09:55:08.401973 kubelet[2028]: I1216 09:55:08.401948 2028 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 09:55:08.401973 kubelet[2028]: I1216 09:55:08.401968 2028 container_manager_linux.go:301] "Creating device plugin manager" Dec 16 09:55:08.402140 kubelet[2028]: I1216 09:55:08.402115 2028 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:55:08.402247 kubelet[2028]: I1216 09:55:08.402229 2028 kubelet.go:396] "Attempting to sync node with API server" Dec 16 09:55:08.402310 kubelet[2028]: I1216 09:55:08.402253 2028 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 09:55:08.402310 kubelet[2028]: I1216 09:55:08.402285 2028 kubelet.go:312] "Adding apiserver pod source" Dec 16 09:55:08.402310 kubelet[2028]: I1216 09:55:08.402298 2028 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 09:55:08.402759 kubelet[2028]: E1216 09:55:08.402727 2028 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:08.403516 kubelet[2028]: E1216 09:55:08.402954 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:08.403891 kubelet[2028]: I1216 09:55:08.403851 2028 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 16 09:55:08.407121 kubelet[2028]: I1216 09:55:08.406888 2028 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 09:55:08.407121 kubelet[2028]: W1216 09:55:08.406972 2028 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
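The container-manager line above packs the node's hard-eviction defaults into a single JSON blob. Purely to make those defaults readable, here is a small sketch (struct names are mine; the values are taken from that log entry, with GracePeriod/MinReclaim omitted) that decodes the fragment and prints each threshold: imagefs.available < 15%, memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// threshold mirrors one entry of the HardEvictionThresholds array in the
// nodeConfig line above, trimmed to the fields used here.
type threshold struct {
	Signal   string `json:"Signal"`
	Operator string `json:"Operator"`
	Value    struct {
		Quantity   *string `json:"Quantity"`
		Percentage float64 `json:"Percentage"`
	} `json:"Value"`
}

func main() {
	// The four defaults as they appear in the log entry above.
	raw := `[
	  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
	  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	  {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}
	]`

	var ts []threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("evict when %s is %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("evict when %s is %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```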
Dec 16 09:55:08.407617 kubelet[2028]: I1216 09:55:08.407600 2028 server.go:1256] "Started kubelet" Dec 16 09:55:08.409201 kubelet[2028]: I1216 09:55:08.407704 2028 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 09:55:08.409201 kubelet[2028]: I1216 09:55:08.408715 2028 server.go:461] "Adding debug handlers to kubelet server" Dec 16 09:55:08.411415 kubelet[2028]: I1216 09:55:08.411370 2028 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 09:55:08.414549 kubelet[2028]: I1216 09:55:08.413428 2028 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 09:55:08.414549 kubelet[2028]: I1216 09:55:08.413656 2028 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 09:55:08.415244 kubelet[2028]: W1216 09:55:08.415040 2028 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 16 09:55:08.415244 kubelet[2028]: E1216 09:55:08.415097 2028 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 16 09:55:08.415244 kubelet[2028]: W1216 09:55:08.415166 2028 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 16 09:55:08.415244 kubelet[2028]: E1216 09:55:08.415184 2028 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 16 09:55:08.421327 kubelet[2028]: E1216 09:55:08.421082 2028 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.18119fafae59e23a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2024-12-16 09:55:08.407571002 +0000 UTC m=+0.487238181,LastTimestamp:2024-12-16 09:55:08.407571002 +0000 UTC m=+0.487238181,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Dec 16 09:55:08.421461 kubelet[2028]: E1216 09:55:08.421363 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:08.421461 kubelet[2028]: I1216 09:55:08.421388 2028 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 16 09:55:08.421534 kubelet[2028]: I1216 09:55:08.421479 2028 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 16 09:55:08.421534 kubelet[2028]: I1216 09:55:08.421523 2028 reconciler_new.go:29] "Reconciler: start to sync state" Dec 16 09:55:08.422986 kubelet[2028]: I1216 09:55:08.422917 2028 factory.go:219] Registration of the crio container factory failed: 
Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 09:55:08.424176 kubelet[2028]: E1216 09:55:08.424018 2028 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 09:55:08.424485 kubelet[2028]: I1216 09:55:08.424377 2028 factory.go:221] Registration of the containerd container factory successfully Dec 16 09:55:08.424485 kubelet[2028]: I1216 09:55:08.424391 2028 factory.go:221] Registration of the systemd container factory successfully Dec 16 09:55:08.436915 kubelet[2028]: E1216 09:55:08.436885 2028 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.18119fafaf5492e9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2024-12-16 09:55:08.424000233 +0000 UTC m=+0.503667432,LastTimestamp:2024-12-16 09:55:08.424000233 +0000 UTC m=+0.503667432,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Dec 16 09:55:08.448190 kubelet[2028]: I1216 09:55:08.447633 2028 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 16 09:55:08.448190 kubelet[2028]: I1216 09:55:08.447668 2028 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 16 09:55:08.448190 kubelet[2028]: I1216 09:55:08.447683 2028 state_mem.go:36] "Initialized new in-memory state store" Dec 16 09:55:08.451783 kubelet[2028]: E1216 09:55:08.451676 2028 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.4\" not found" node="10.0.0.4" Dec 16 09:55:08.451783 kubelet[2028]: I1216 09:55:08.451740 2028 policy_none.go:49] "None policy: Start" Dec 16 09:55:08.452422 kubelet[2028]: I1216 09:55:08.452360 2028 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 16 09:55:08.452422 kubelet[2028]: I1216 09:55:08.452377 2028 state_mem.go:35] "Initializing new in-memory state store" Dec 16 09:55:08.460899 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 09:55:08.471739 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 09:55:08.480863 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 09:55:08.483015 kubelet[2028]: I1216 09:55:08.482976 2028 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 09:55:08.484151 kubelet[2028]: I1216 09:55:08.483528 2028 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 09:55:08.485223 kubelet[2028]: E1216 09:55:08.485193 2028 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found" Dec 16 09:55:08.504844 kubelet[2028]: I1216 09:55:08.504727 2028 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 16 09:55:08.506319 kubelet[2028]: I1216 09:55:08.505917 2028 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 09:55:08.506319 kubelet[2028]: I1216 09:55:08.505968 2028 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 16 09:55:08.506319 kubelet[2028]: I1216 09:55:08.505990 2028 kubelet.go:2329] "Starting kubelet main sync loop" Dec 16 09:55:08.506640 kubelet[2028]: E1216 09:55:08.506620 2028 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 16 09:55:08.522610 kubelet[2028]: I1216 09:55:08.522590 2028 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.4" Dec 16 09:55:08.526535 kubelet[2028]: I1216 09:55:08.526510 2028 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.4" Dec 16 09:55:08.532853 kubelet[2028]: E1216 09:55:08.532810 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:08.634416 kubelet[2028]: E1216 09:55:08.633655 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:08.734686 kubelet[2028]: E1216 09:55:08.734617 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:08.834850 kubelet[2028]: E1216 09:55:08.834769 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:08.936052 kubelet[2028]: E1216 09:55:08.935779 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:09.036574 kubelet[2028]: E1216 09:55:09.036514 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:09.137711 kubelet[2028]: E1216 09:55:09.137618 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:09.238960 kubelet[2028]: E1216 09:55:09.238726 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:09.339888 kubelet[2028]: E1216 09:55:09.339810 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:09.369323 kubelet[2028]: I1216 09:55:09.369219 2028 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 16 09:55:09.369560 kubelet[2028]: W1216 09:55:09.369492 2028 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 16 09:55:09.369560 kubelet[2028]: W1216 09:55:09.369542 2028 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 16 09:55:09.403972 kubelet[2028]: E1216 09:55:09.403848 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:09.439162 sudo[1899]: pam_unix(sudo:session): session closed for user root Dec 16 09:55:09.441016 kubelet[2028]: E1216 09:55:09.440965 2028 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:09.541400 kubelet[2028]: E1216 09:55:09.541327 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:09.598564 sshd[1896]: pam_unix(sshd:session): session closed for user core Dec 16 09:55:09.607603 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit. Dec 16 09:55:09.607681 systemd[1]: sshd@6-188.34.167.196:22-147.75.109.163:58068.service: Deactivated successfully. Dec 16 09:55:09.612579 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 09:55:09.614885 systemd-logind[1480]: Removed session 7. Dec 16 09:55:09.642231 kubelet[2028]: E1216 09:55:09.642134 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:09.742849 kubelet[2028]: E1216 09:55:09.742775 2028 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Dec 16 09:55:09.844240 kubelet[2028]: I1216 09:55:09.843662 2028 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 16 09:55:09.844743 containerd[1506]: time="2024-12-16T09:55:09.844684102Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 16 09:55:09.845600 kubelet[2028]: I1216 09:55:09.844918 2028 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 16 09:55:10.404646 kubelet[2028]: E1216 09:55:10.404586 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:10.404646 kubelet[2028]: I1216 09:55:10.404615 2028 apiserver.go:52] "Watching apiserver" Dec 16 09:55:10.413415 kubelet[2028]: I1216 09:55:10.412291 2028 topology_manager.go:215] "Topology Admit Handler" podUID="1dee4954-e6cf-43fc-9eac-a7a3336f0f64" podNamespace="calico-system" podName="calico-node-l4mhm" Dec 16 09:55:10.413415 kubelet[2028]: I1216 09:55:10.412391 2028 topology_manager.go:215] "Topology Admit Handler" podUID="3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af" podNamespace="calico-system" podName="csi-node-driver-5vv27" Dec 16 09:55:10.413415 kubelet[2028]: I1216 09:55:10.412434 2028 topology_manager.go:215] "Topology Admit Handler" podUID="e1323363-aac9-4831-994c-4a80f9b95190" podNamespace="kube-system" podName="kube-proxy-jltpx" Dec 16 09:55:10.413415 kubelet[2028]: E1216 09:55:10.412793 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vv27" podUID="3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af" Dec 16 09:55:10.425128 systemd[1]: Created slice kubepods-besteffort-pode1323363_aac9_4831_994c_4a80f9b95190.slice - libcontainer container kubepods-besteffort-pode1323363_aac9_4831_994c_4a80f9b95190.slice. 
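At this point the node has been assigned PodCIDR 192.168.1.0/24 and the kubelet passes it to the container runtime; whether the CNI IPAM actually allocates pod addresses from that range depends on the plugin's configuration, and the csi-node-driver pod above remains blocked until the Calico CNI plugin is initialized. A trivial sketch of what that CIDR provides:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The PodCIDR the kubelet reports to the runtime in the log above.
	_, ipnet, err := net.ParseCIDR("192.168.1.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("node PodCIDR %s: /%d, %d addresses available for pod networking\n",
		ipnet, ones, 1<<(bits-ones))
}
```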
Dec 16 09:55:10.426395 kubelet[2028]: I1216 09:55:10.426350 2028 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 16 09:55:10.432760 kubelet[2028]: I1216 09:55:10.432707 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1323363-aac9-4831-994c-4a80f9b95190-kube-proxy\") pod \"kube-proxy-jltpx\" (UID: \"e1323363-aac9-4831-994c-4a80f9b95190\") " pod="kube-system/kube-proxy-jltpx" Dec 16 09:55:10.432929 kubelet[2028]: I1216 09:55:10.432769 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1323363-aac9-4831-994c-4a80f9b95190-xtables-lock\") pod \"kube-proxy-jltpx\" (UID: \"e1323363-aac9-4831-994c-4a80f9b95190\") " pod="kube-system/kube-proxy-jltpx" Dec 16 09:55:10.432929 kubelet[2028]: I1216 09:55:10.432808 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-var-lib-calico\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.432929 kubelet[2028]: I1216 09:55:10.432825 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-cni-net-dir\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.432929 kubelet[2028]: I1216 09:55:10.432845 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcvd5\" (UniqueName: \"kubernetes.io/projected/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-kube-api-access-qcvd5\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.432929 kubelet[2028]: I1216 09:55:10.432861 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af-varrun\") pod \"csi-node-driver-5vv27\" (UID: \"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af\") " pod="calico-system/csi-node-driver-5vv27" Dec 16 09:55:10.433182 kubelet[2028]: I1216 09:55:10.432878 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1323363-aac9-4831-994c-4a80f9b95190-lib-modules\") pod \"kube-proxy-jltpx\" (UID: \"e1323363-aac9-4831-994c-4a80f9b95190\") " pod="kube-system/kube-proxy-jltpx" Dec 16 09:55:10.433182 kubelet[2028]: I1216 09:55:10.432894 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-xtables-lock\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.433182 kubelet[2028]: I1216 09:55:10.432924 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-policysync\") pod \"calico-node-l4mhm\" (UID: 
\"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.433182 kubelet[2028]: I1216 09:55:10.432941 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-tigera-ca-bundle\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.433182 kubelet[2028]: I1216 09:55:10.432959 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af-socket-dir\") pod \"csi-node-driver-5vv27\" (UID: \"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af\") " pod="calico-system/csi-node-driver-5vv27" Dec 16 09:55:10.433411 kubelet[2028]: I1216 09:55:10.432976 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-var-run-calico\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.433411 kubelet[2028]: I1216 09:55:10.433002 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-cni-bin-dir\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.433411 kubelet[2028]: I1216 09:55:10.433020 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-flexvol-driver-host\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.433411 kubelet[2028]: I1216 09:55:10.433036 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af-kubelet-dir\") pod \"csi-node-driver-5vv27\" (UID: \"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af\") " pod="calico-system/csi-node-driver-5vv27" Dec 16 09:55:10.433411 kubelet[2028]: I1216 09:55:10.433097 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af-registration-dir\") pod \"csi-node-driver-5vv27\" (UID: \"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af\") " pod="calico-system/csi-node-driver-5vv27" Dec 16 09:55:10.433661 kubelet[2028]: I1216 09:55:10.433124 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqlgj\" (UniqueName: \"kubernetes.io/projected/3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af-kube-api-access-wqlgj\") pod \"csi-node-driver-5vv27\" (UID: \"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af\") " pod="calico-system/csi-node-driver-5vv27" Dec 16 09:55:10.433661 kubelet[2028]: I1216 09:55:10.433146 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-765mz\" (UniqueName: \"kubernetes.io/projected/e1323363-aac9-4831-994c-4a80f9b95190-kube-api-access-765mz\") pod \"kube-proxy-jltpx\" (UID: 
\"e1323363-aac9-4831-994c-4a80f9b95190\") " pod="kube-system/kube-proxy-jltpx" Dec 16 09:55:10.433661 kubelet[2028]: I1216 09:55:10.433186 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-lib-modules\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.433661 kubelet[2028]: I1216 09:55:10.433216 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-node-certs\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.433661 kubelet[2028]: I1216 09:55:10.433256 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1dee4954-e6cf-43fc-9eac-a7a3336f0f64-cni-log-dir\") pod \"calico-node-l4mhm\" (UID: \"1dee4954-e6cf-43fc-9eac-a7a3336f0f64\") " pod="calico-system/calico-node-l4mhm" Dec 16 09:55:10.445884 systemd[1]: Created slice kubepods-besteffort-pod1dee4954_e6cf_43fc_9eac_a7a3336f0f64.slice - libcontainer container kubepods-besteffort-pod1dee4954_e6cf_43fc_9eac_a7a3336f0f64.slice. Dec 16 09:55:10.538789 kubelet[2028]: E1216 09:55:10.538524 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.538789 kubelet[2028]: W1216 09:55:10.538554 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.538789 kubelet[2028]: E1216 09:55:10.538624 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.540438 kubelet[2028]: E1216 09:55:10.539995 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.540438 kubelet[2028]: W1216 09:55:10.540021 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.540685 kubelet[2028]: E1216 09:55:10.540661 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.540975 kubelet[2028]: W1216 09:55:10.540798 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.541460 kubelet[2028]: E1216 09:55:10.541432 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.541884 kubelet[2028]: E1216 09:55:10.541610 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:55:10.541884 kubelet[2028]: E1216 09:55:10.541706 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.541884 kubelet[2028]: W1216 09:55:10.541834 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.543443 kubelet[2028]: E1216 09:55:10.543365 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.543790 kubelet[2028]: E1216 09:55:10.543729 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.543790 kubelet[2028]: W1216 09:55:10.543751 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.544387 kubelet[2028]: E1216 09:55:10.544218 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.545636 kubelet[2028]: E1216 09:55:10.545442 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.545636 kubelet[2028]: W1216 09:55:10.545465 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.546476 kubelet[2028]: E1216 09:55:10.546428 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.546791 kubelet[2028]: W1216 09:55:10.546449 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.549245 kubelet[2028]: E1216 09:55:10.549218 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.549640 kubelet[2028]: E1216 09:55:10.549511 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.556265 kubelet[2028]: E1216 09:55:10.555744 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.556265 kubelet[2028]: W1216 09:55:10.555766 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.558300 kubelet[2028]: E1216 09:55:10.558137 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:55:10.560106 kubelet[2028]: E1216 09:55:10.558570 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.560106 kubelet[2028]: W1216 09:55:10.558592 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.561475 kubelet[2028]: E1216 09:55:10.561429 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.565415 kubelet[2028]: E1216 09:55:10.565389 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.565859 kubelet[2028]: W1216 09:55:10.565717 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.566303 kubelet[2028]: E1216 09:55:10.566031 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.566695 kubelet[2028]: E1216 09:55:10.566673 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.566951 kubelet[2028]: W1216 09:55:10.566775 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.567126 kubelet[2028]: E1216 09:55:10.567104 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.567346 kubelet[2028]: E1216 09:55:10.567326 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.567531 kubelet[2028]: W1216 09:55:10.567419 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.567643 kubelet[2028]: E1216 09:55:10.567623 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.568197 kubelet[2028]: E1216 09:55:10.568133 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.568197 kubelet[2028]: W1216 09:55:10.568155 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.568825 kubelet[2028]: E1216 09:55:10.568624 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:55:10.569104 kubelet[2028]: E1216 09:55:10.569043 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.569239 kubelet[2028]: W1216 09:55:10.569217 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.569485 kubelet[2028]: E1216 09:55:10.569453 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.570205 kubelet[2028]: E1216 09:55:10.570182 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.570458 kubelet[2028]: W1216 09:55:10.570366 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.570656 kubelet[2028]: E1216 09:55:10.570566 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.571458 kubelet[2028]: E1216 09:55:10.571331 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.571458 kubelet[2028]: W1216 09:55:10.571351 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.571787 kubelet[2028]: E1216 09:55:10.571637 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.572518 kubelet[2028]: E1216 09:55:10.572388 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.572518 kubelet[2028]: W1216 09:55:10.572408 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.572798 kubelet[2028]: E1216 09:55:10.572694 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.573488 kubelet[2028]: E1216 09:55:10.573428 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.573488 kubelet[2028]: W1216 09:55:10.573452 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.573722 kubelet[2028]: E1216 09:55:10.573568 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:55:10.574113 kubelet[2028]: E1216 09:55:10.574014 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.574113 kubelet[2028]: W1216 09:55:10.574047 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.574113 kubelet[2028]: E1216 09:55:10.574155 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.574638 kubelet[2028]: E1216 09:55:10.574484 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.574638 kubelet[2028]: W1216 09:55:10.574510 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.574638 kubelet[2028]: E1216 09:55:10.574567 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.574943 kubelet[2028]: E1216 09:55:10.574834 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.574943 kubelet[2028]: W1216 09:55:10.574848 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.575179 kubelet[2028]: E1216 09:55:10.574987 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.575496 kubelet[2028]: E1216 09:55:10.575264 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.575496 kubelet[2028]: W1216 09:55:10.575279 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.575776 kubelet[2028]: E1216 09:55:10.575389 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.575776 kubelet[2028]: E1216 09:55:10.575667 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.575776 kubelet[2028]: W1216 09:55:10.575684 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.576139 kubelet[2028]: E1216 09:55:10.576044 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:55:10.576139 kubelet[2028]: E1216 09:55:10.576054 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.576139 kubelet[2028]: W1216 09:55:10.576106 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.576539 kubelet[2028]: E1216 09:55:10.576412 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.576539 kubelet[2028]: W1216 09:55:10.576437 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.577238 kubelet[2028]: E1216 09:55:10.576765 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.577238 kubelet[2028]: W1216 09:55:10.576779 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.577238 kubelet[2028]: E1216 09:55:10.576817 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.577238 kubelet[2028]: E1216 09:55:10.577193 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.577238 kubelet[2028]: W1216 09:55:10.577207 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.577238 kubelet[2028]: E1216 09:55:10.577233 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.577543 kubelet[2028]: E1216 09:55:10.577335 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.577543 kubelet[2028]: E1216 09:55:10.577363 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.579295 kubelet[2028]: E1216 09:55:10.579254 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.579430 kubelet[2028]: W1216 09:55:10.579306 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.579430 kubelet[2028]: E1216 09:55:10.579333 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 09:55:10.597300 kubelet[2028]: E1216 09:55:10.596955 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.597300 kubelet[2028]: W1216 09:55:10.596995 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.597300 kubelet[2028]: E1216 09:55:10.597055 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.597688 kubelet[2028]: E1216 09:55:10.597661 2028 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 09:55:10.599669 kubelet[2028]: W1216 09:55:10.599587 2028 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 09:55:10.599669 kubelet[2028]: E1216 09:55:10.599627 2028 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 09:55:10.741534 containerd[1506]: time="2024-12-16T09:55:10.741204839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jltpx,Uid:e1323363-aac9-4831-994c-4a80f9b95190,Namespace:kube-system,Attempt:0,}" Dec 16 09:55:10.752736 containerd[1506]: time="2024-12-16T09:55:10.752235409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l4mhm,Uid:1dee4954-e6cf-43fc-9eac-a7a3336f0f64,Namespace:calico-system,Attempt:0,}" Dec 16 09:55:11.374089 containerd[1506]: time="2024-12-16T09:55:11.373985509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:55:11.375581 containerd[1506]: time="2024-12-16T09:55:11.375531909Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:55:11.377295 containerd[1506]: time="2024-12-16T09:55:11.377248225Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Dec 16 09:55:11.378311 containerd[1506]: time="2024-12-16T09:55:11.378262060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 16 09:55:11.380966 containerd[1506]: time="2024-12-16T09:55:11.380864161Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:55:11.386956 containerd[1506]: time="2024-12-16T09:55:11.386293608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 09:55:11.388651 containerd[1506]: time="2024-12-16T09:55:11.388587745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 647.263873ms" Dec 16 09:55:11.392545 containerd[1506]: time="2024-12-16T09:55:11.392485558Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 640.093967ms" Dec 16 09:55:11.405859 kubelet[2028]: E1216 09:55:11.405780 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:11.555313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768411586.mount: Deactivated successfully. Dec 16 09:55:11.575339 containerd[1506]: time="2024-12-16T09:55:11.574593663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:55:11.575580 containerd[1506]: time="2024-12-16T09:55:11.575471833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:55:11.575580 containerd[1506]: time="2024-12-16T09:55:11.575522768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:55:11.575943 containerd[1506]: time="2024-12-16T09:55:11.575758178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:55:11.581096 containerd[1506]: time="2024-12-16T09:55:11.576168986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:55:11.581096 containerd[1506]: time="2024-12-16T09:55:11.576881206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:55:11.581096 containerd[1506]: time="2024-12-16T09:55:11.576920179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:55:11.581096 containerd[1506]: time="2024-12-16T09:55:11.577018714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:55:11.657824 systemd[1]: run-containerd-runc-k8s.io-fa21f43095b58439d756ced3afdb2307f9301e17b77d59380c83ec89909553ff-runc.A3AR67.mount: Deactivated successfully. Dec 16 09:55:11.668212 systemd[1]: Started cri-containerd-7a4d28c3ada76a7259d0f57fa1aede0c5390762ac80d9da6892d57ba71dd7adc.scope - libcontainer container 7a4d28c3ada76a7259d0f57fa1aede0c5390762ac80d9da6892d57ba71dd7adc. Dec 16 09:55:11.672453 systemd[1]: Started cri-containerd-fa21f43095b58439d756ced3afdb2307f9301e17b77d59380c83ec89909553ff.scope - libcontainer container fa21f43095b58439d756ced3afdb2307f9301e17b77d59380c83ec89909553ff. 
Dec 16 09:55:11.708887 containerd[1506]: time="2024-12-16T09:55:11.708780438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jltpx,Uid:e1323363-aac9-4831-994c-4a80f9b95190,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa21f43095b58439d756ced3afdb2307f9301e17b77d59380c83ec89909553ff\"" Dec 16 09:55:11.709095 containerd[1506]: time="2024-12-16T09:55:11.708846192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l4mhm,Uid:1dee4954-e6cf-43fc-9eac-a7a3336f0f64,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a4d28c3ada76a7259d0f57fa1aede0c5390762ac80d9da6892d57ba71dd7adc\"" Dec 16 09:55:11.711790 containerd[1506]: time="2024-12-16T09:55:11.711759164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 16 09:55:12.406125 kubelet[2028]: E1216 09:55:12.406024 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:12.507571 kubelet[2028]: E1216 09:55:12.506838 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vv27" podUID="3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af" Dec 16 09:55:13.406566 kubelet[2028]: E1216 09:55:13.406513 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:13.951609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount620675332.mount: Deactivated successfully. Dec 16 09:55:14.051616 containerd[1506]: time="2024-12-16T09:55:14.051547736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:14.053153 containerd[1506]: time="2024-12-16T09:55:14.053097341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 16 09:55:14.054380 containerd[1506]: time="2024-12-16T09:55:14.054305921Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:14.056299 containerd[1506]: time="2024-12-16T09:55:14.056231069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:14.057632 containerd[1506]: time="2024-12-16T09:55:14.056797989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.34499363s" Dec 16 09:55:14.057632 containerd[1506]: time="2024-12-16T09:55:14.056826403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 16 09:55:14.058009 containerd[1506]: time="2024-12-16T09:55:14.057982544Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 16 
09:55:14.059230 containerd[1506]: time="2024-12-16T09:55:14.059194979Z" level=info msg="CreateContainer within sandbox \"7a4d28c3ada76a7259d0f57fa1aede0c5390762ac80d9da6892d57ba71dd7adc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 09:55:14.080210 containerd[1506]: time="2024-12-16T09:55:14.080136913Z" level=info msg="CreateContainer within sandbox \"7a4d28c3ada76a7259d0f57fa1aede0c5390762ac80d9da6892d57ba71dd7adc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"46f287137f3cc88770bd87fe3d142dbec70997fdb5764689e373a64331aaba36\"" Dec 16 09:55:14.081183 containerd[1506]: time="2024-12-16T09:55:14.081132834Z" level=info msg="StartContainer for \"46f287137f3cc88770bd87fe3d142dbec70997fdb5764689e373a64331aaba36\"" Dec 16 09:55:14.128467 systemd[1]: Started cri-containerd-46f287137f3cc88770bd87fe3d142dbec70997fdb5764689e373a64331aaba36.scope - libcontainer container 46f287137f3cc88770bd87fe3d142dbec70997fdb5764689e373a64331aaba36. Dec 16 09:55:14.166460 containerd[1506]: time="2024-12-16T09:55:14.166372589Z" level=info msg="StartContainer for \"46f287137f3cc88770bd87fe3d142dbec70997fdb5764689e373a64331aaba36\" returns successfully" Dec 16 09:55:14.184633 systemd[1]: cri-containerd-46f287137f3cc88770bd87fe3d142dbec70997fdb5764689e373a64331aaba36.scope: Deactivated successfully. Dec 16 09:55:14.245716 containerd[1506]: time="2024-12-16T09:55:14.244745290Z" level=info msg="shim disconnected" id=46f287137f3cc88770bd87fe3d142dbec70997fdb5764689e373a64331aaba36 namespace=k8s.io Dec 16 09:55:14.245716 containerd[1506]: time="2024-12-16T09:55:14.244840709Z" level=warning msg="cleaning up after shim disconnected" id=46f287137f3cc88770bd87fe3d142dbec70997fdb5764689e373a64331aaba36 namespace=k8s.io Dec 16 09:55:14.245716 containerd[1506]: time="2024-12-16T09:55:14.244857941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:55:14.407034 kubelet[2028]: E1216 09:55:14.406974 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:14.507698 kubelet[2028]: E1216 09:55:14.507039 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vv27" podUID="3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af" Dec 16 09:55:14.884483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46f287137f3cc88770bd87fe3d142dbec70997fdb5764689e373a64331aaba36-rootfs.mount: Deactivated successfully. Dec 16 09:55:15.132610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628159747.mount: Deactivated successfully. 
Dec 16 09:55:15.407520 kubelet[2028]: E1216 09:55:15.407468 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:15.466169 containerd[1506]: time="2024-12-16T09:55:15.466079310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:15.467181 containerd[1506]: time="2024-12-16T09:55:15.467142668Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619984" Dec 16 09:55:15.468120 containerd[1506]: time="2024-12-16T09:55:15.468077926Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:15.470010 containerd[1506]: time="2024-12-16T09:55:15.469972317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:15.470686 containerd[1506]: time="2024-12-16T09:55:15.470531052Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.412519345s" Dec 16 09:55:15.470686 containerd[1506]: time="2024-12-16T09:55:15.470571788Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 16 09:55:15.471583 containerd[1506]: time="2024-12-16T09:55:15.471425703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 16 09:55:15.472933 containerd[1506]: time="2024-12-16T09:55:15.472801836Z" level=info msg="CreateContainer within sandbox \"fa21f43095b58439d756ced3afdb2307f9301e17b77d59380c83ec89909553ff\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 09:55:15.497222 containerd[1506]: time="2024-12-16T09:55:15.497146063Z" level=info msg="CreateContainer within sandbox \"fa21f43095b58439d756ced3afdb2307f9301e17b77d59380c83ec89909553ff\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e349829d35dc93dc94eaae19a140dff18a96d0cbef2dcd9310cca40a45feca7\"" Dec 16 09:55:15.497780 containerd[1506]: time="2024-12-16T09:55:15.497758739Z" level=info msg="StartContainer for \"7e349829d35dc93dc94eaae19a140dff18a96d0cbef2dcd9310cca40a45feca7\"" Dec 16 09:55:15.533212 systemd[1]: Started cri-containerd-7e349829d35dc93dc94eaae19a140dff18a96d0cbef2dcd9310cca40a45feca7.scope - libcontainer container 7e349829d35dc93dc94eaae19a140dff18a96d0cbef2dcd9310cca40a45feca7. 
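For scale, the pull figures reported just above are self-contained arithmetic: kube-proxy arrives as 28618977 bytes in 1.412519345s, roughly 20 MB/s on this host. The snippet below only redoes that division with sizes and durations copied verbatim from the log; it does not query containerd.

```go
// Rough throughput check for the image pulls logged above.
package main

import (
	"fmt"
	"time"
)

func mbPerSecond(bytes float64, d time.Duration) float64 {
	return bytes / d.Seconds() / 1e6
}

func main() {
	d, _ := time.ParseDuration("1.412519345s")
	fmt.Printf("kube-proxy (28618977 B): %.1f MB/s\n", mbPerSecond(28618977, d))

	d, _ = time.ParseDuration("2.34499363s")
	fmt.Printf("pod2daemon-flexvol (6855165 B): %.1f MB/s\n", mbPerSecond(6855165, d))
}
```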
Dec 16 09:55:15.563760 containerd[1506]: time="2024-12-16T09:55:15.563711665Z" level=info msg="StartContainer for \"7e349829d35dc93dc94eaae19a140dff18a96d0cbef2dcd9310cca40a45feca7\" returns successfully" Dec 16 09:55:16.408121 kubelet[2028]: E1216 09:55:16.407983 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:16.507565 kubelet[2028]: E1216 09:55:16.507030 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vv27" podUID="3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af" Dec 16 09:55:16.547320 kubelet[2028]: I1216 09:55:16.547225 2028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jltpx" podStartSLOduration=4.787464986 podStartE2EDuration="8.547155872s" podCreationTimestamp="2024-12-16 09:55:08 +0000 UTC" firstStartedPulling="2024-12-16 09:55:11.71136051 +0000 UTC m=+3.791027688" lastFinishedPulling="2024-12-16 09:55:15.471051395 +0000 UTC m=+7.550718574" observedRunningTime="2024-12-16 09:55:16.546888943 +0000 UTC m=+8.626556152" watchObservedRunningTime="2024-12-16 09:55:16.547155872 +0000 UTC m=+8.626823090" Dec 16 09:55:17.409361 kubelet[2028]: E1216 09:55:17.409310 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:18.410376 kubelet[2028]: E1216 09:55:18.410310 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:18.507487 kubelet[2028]: E1216 09:55:18.507004 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vv27" podUID="3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af" Dec 16 09:55:19.411170 kubelet[2028]: E1216 09:55:19.411125 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:20.412560 kubelet[2028]: E1216 09:55:20.412483 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:20.507416 kubelet[2028]: E1216 09:55:20.506993 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5vv27" podUID="3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af" Dec 16 09:55:20.513416 containerd[1506]: time="2024-12-16T09:55:20.513329125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:20.515099 containerd[1506]: time="2024-12-16T09:55:20.515010328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 16 09:55:20.516791 containerd[1506]: time="2024-12-16T09:55:20.516699026Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:20.521200 
containerd[1506]: time="2024-12-16T09:55:20.521088766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:20.522425 containerd[1506]: time="2024-12-16T09:55:20.522289200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.050820236s" Dec 16 09:55:20.522425 containerd[1506]: time="2024-12-16T09:55:20.522328664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 16 09:55:20.524579 containerd[1506]: time="2024-12-16T09:55:20.524531384Z" level=info msg="CreateContainer within sandbox \"7a4d28c3ada76a7259d0f57fa1aede0c5390762ac80d9da6892d57ba71dd7adc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 09:55:20.539971 containerd[1506]: time="2024-12-16T09:55:20.539917606Z" level=info msg="CreateContainer within sandbox \"7a4d28c3ada76a7259d0f57fa1aede0c5390762ac80d9da6892d57ba71dd7adc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"942ca0feaa61c033ffa0959bcae972e6386ea5dea1e2e865cffa7358e731c1e3\"" Dec 16 09:55:20.540685 containerd[1506]: time="2024-12-16T09:55:20.540396212Z" level=info msg="StartContainer for \"942ca0feaa61c033ffa0959bcae972e6386ea5dea1e2e865cffa7358e731c1e3\"" Dec 16 09:55:20.573252 systemd[1]: Started cri-containerd-942ca0feaa61c033ffa0959bcae972e6386ea5dea1e2e865cffa7358e731c1e3.scope - libcontainer container 942ca0feaa61c033ffa0959bcae972e6386ea5dea1e2e865cffa7358e731c1e3. Dec 16 09:55:20.602239 containerd[1506]: time="2024-12-16T09:55:20.602199395Z" level=info msg="StartContainer for \"942ca0feaa61c033ffa0959bcae972e6386ea5dea1e2e865cffa7358e731c1e3\" returns successfully" Dec 16 09:55:21.107046 containerd[1506]: time="2024-12-16T09:55:21.106588958Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 09:55:21.110613 systemd[1]: cri-containerd-942ca0feaa61c033ffa0959bcae972e6386ea5dea1e2e865cffa7358e731c1e3.scope: Deactivated successfully. Dec 16 09:55:21.145489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-942ca0feaa61c033ffa0959bcae972e6386ea5dea1e2e865cffa7358e731c1e3-rootfs.mount: Deactivated successfully. 
Dec 16 09:55:21.161528 kubelet[2028]: I1216 09:55:21.160996 2028 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 16 09:55:21.211239 containerd[1506]: time="2024-12-16T09:55:21.211122174Z" level=info msg="shim disconnected" id=942ca0feaa61c033ffa0959bcae972e6386ea5dea1e2e865cffa7358e731c1e3 namespace=k8s.io Dec 16 09:55:21.211239 containerd[1506]: time="2024-12-16T09:55:21.211204058Z" level=warning msg="cleaning up after shim disconnected" id=942ca0feaa61c033ffa0959bcae972e6386ea5dea1e2e865cffa7358e731c1e3 namespace=k8s.io Dec 16 09:55:21.211239 containerd[1506]: time="2024-12-16T09:55:21.211221600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 16 09:55:21.413584 kubelet[2028]: E1216 09:55:21.413377 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:21.556155 containerd[1506]: time="2024-12-16T09:55:21.556104287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 16 09:55:22.414219 kubelet[2028]: E1216 09:55:22.414115 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:22.518155 systemd[1]: Created slice kubepods-besteffort-pod3b3f85f9_f74d_4e2a_9c4b_5a8a7393e8af.slice - libcontainer container kubepods-besteffort-pod3b3f85f9_f74d_4e2a_9c4b_5a8a7393e8af.slice. Dec 16 09:55:22.522832 containerd[1506]: time="2024-12-16T09:55:22.522693225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vv27,Uid:3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af,Namespace:calico-system,Attempt:0,}" Dec 16 09:55:22.659531 containerd[1506]: time="2024-12-16T09:55:22.658382205Z" level=error msg="Failed to destroy network for sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:55:22.659531 containerd[1506]: time="2024-12-16T09:55:22.659249928Z" level=error msg="encountered an error cleaning up failed sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:55:22.659531 containerd[1506]: time="2024-12-16T09:55:22.659341048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vv27,Uid:3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:55:22.661612 kubelet[2028]: E1216 09:55:22.660873 2028 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:55:22.661612 kubelet[2028]: E1216 09:55:22.660996 2028 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5vv27" Dec 16 09:55:22.661612 kubelet[2028]: E1216 09:55:22.661047 2028 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5vv27" Dec 16 09:55:22.661942 kubelet[2028]: E1216 09:55:22.661198 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5vv27_calico-system(3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5vv27_calico-system(3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5vv27" podUID="3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af" Dec 16 09:55:22.664843 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe-shm.mount: Deactivated successfully. 
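The csi-node-driver sandbox above fails on a single precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file which, as far as the surrounding log suggests, only the calico-node container writes once it has started and registered the node; the flexvol and install-cni containers that have run so far do not create it. A minimal illustrative version of that check, not Calico's actual code:

```go
// Illustrative precondition check mirroring the error in the log:
// "stat /var/lib/calico/nodename: no such file or directory: check that
//  the calico/node container is running and has mounted /var/lib/calico/".
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodeReady() (string, error) {
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		return "", fmt.Errorf("calico/node not ready yet: %w", err)
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	if name, err := calicoNodeReady(); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("calico/node registered this host as:", name)
	}
}
```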
Dec 16 09:55:23.414480 kubelet[2028]: E1216 09:55:23.414418 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:23.562191 kubelet[2028]: I1216 09:55:23.561529 2028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:55:23.562950 containerd[1506]: time="2024-12-16T09:55:23.562726523Z" level=info msg="StopPodSandbox for \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\"" Dec 16 09:55:23.563238 containerd[1506]: time="2024-12-16T09:55:23.563159994Z" level=info msg="Ensure that sandbox ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe in task-service has been cleanup successfully" Dec 16 09:55:23.623210 containerd[1506]: time="2024-12-16T09:55:23.623018043Z" level=error msg="StopPodSandbox for \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\" failed" error="failed to destroy network for sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:55:23.623446 kubelet[2028]: E1216 09:55:23.623416 2028 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:55:23.623972 kubelet[2028]: E1216 09:55:23.623895 2028 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe"} Dec 16 09:55:23.623972 kubelet[2028]: E1216 09:55:23.623957 2028 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:55:23.624265 kubelet[2028]: E1216 09:55:23.624004 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5vv27" podUID="3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af" Dec 16 09:55:24.415692 kubelet[2028]: E1216 09:55:24.415614 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:25.416221 kubelet[2028]: E1216 09:55:25.416163 2028 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:26.417042 kubelet[2028]: E1216 09:55:26.416962 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:27.418679 kubelet[2028]: E1216 09:55:27.418635 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:27.638735 kubelet[2028]: I1216 09:55:27.637855 2028 topology_manager.go:215] "Topology Admit Handler" podUID="ff41c56c-b178-4ae4-bd93-317967373b16" podNamespace="default" podName="nginx-deployment-6d5f899847-4wg2x" Dec 16 09:55:27.647606 systemd[1]: Created slice kubepods-besteffort-podff41c56c_b178_4ae4_bd93_317967373b16.slice - libcontainer container kubepods-besteffort-podff41c56c_b178_4ae4_bd93_317967373b16.slice. Dec 16 09:55:27.743454 kubelet[2028]: I1216 09:55:27.743263 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpvrw\" (UniqueName: \"kubernetes.io/projected/ff41c56c-b178-4ae4-bd93-317967373b16-kube-api-access-rpvrw\") pod \"nginx-deployment-6d5f899847-4wg2x\" (UID: \"ff41c56c-b178-4ae4-bd93-317967373b16\") " pod="default/nginx-deployment-6d5f899847-4wg2x" Dec 16 09:55:27.952959 containerd[1506]: time="2024-12-16T09:55:27.952841735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-4wg2x,Uid:ff41c56c-b178-4ae4-bd93-317967373b16,Namespace:default,Attempt:0,}" Dec 16 09:55:28.062097 containerd[1506]: time="2024-12-16T09:55:28.062001306Z" level=error msg="Failed to destroy network for sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:55:28.063897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75-shm.mount: Deactivated successfully. 
Dec 16 09:55:28.064687 containerd[1506]: time="2024-12-16T09:55:28.064649390Z" level=error msg="encountered an error cleaning up failed sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:55:28.064759 containerd[1506]: time="2024-12-16T09:55:28.064699174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-4wg2x,Uid:ff41c56c-b178-4ae4-bd93-317967373b16,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:55:28.065130 kubelet[2028]: E1216 09:55:28.065076 2028 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:55:28.065256 kubelet[2028]: E1216 09:55:28.065128 2028 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-4wg2x" Dec 16 09:55:28.065256 kubelet[2028]: E1216 09:55:28.065172 2028 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-4wg2x" Dec 16 09:55:28.065256 kubelet[2028]: E1216 09:55:28.065229 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-4wg2x_default(ff41c56c-b178-4ae4-bd93-317967373b16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-4wg2x_default(ff41c56c-b178-4ae4-bd93-317967373b16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-4wg2x" podUID="ff41c56c-b178-4ae4-bd93-317967373b16" Dec 16 09:55:28.403839 kubelet[2028]: E1216 09:55:28.403424 2028 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:28.419645 kubelet[2028]: E1216 09:55:28.419563 2028 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:28.570864 kubelet[2028]: I1216 09:55:28.570541 2028 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:55:28.571537 containerd[1506]: time="2024-12-16T09:55:28.571132227Z" level=info msg="StopPodSandbox for \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\"" Dec 16 09:55:28.571537 containerd[1506]: time="2024-12-16T09:55:28.571281836Z" level=info msg="Ensure that sandbox 8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75 in task-service has been cleanup successfully" Dec 16 09:55:28.628220 containerd[1506]: time="2024-12-16T09:55:28.627746973Z" level=error msg="StopPodSandbox for \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\" failed" error="failed to destroy network for sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 09:55:28.628353 kubelet[2028]: E1216 09:55:28.628033 2028 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:55:28.628353 kubelet[2028]: E1216 09:55:28.628123 2028 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75"} Dec 16 09:55:28.628353 kubelet[2028]: E1216 09:55:28.628163 2028 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff41c56c-b178-4ae4-bd93-317967373b16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 16 09:55:28.628353 kubelet[2028]: E1216 09:55:28.628195 2028 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff41c56c-b178-4ae4-bd93-317967373b16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-4wg2x" podUID="ff41c56c-b178-4ae4-bd93-317967373b16" Dec 16 09:55:28.709629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349800649.mount: Deactivated successfully. 
Dec 16 09:55:28.754448 containerd[1506]: time="2024-12-16T09:55:28.754371687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:28.755563 containerd[1506]: time="2024-12-16T09:55:28.755520246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 16 09:55:28.756542 containerd[1506]: time="2024-12-16T09:55:28.756509016Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:28.759745 containerd[1506]: time="2024-12-16T09:55:28.759679758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:28.760929 containerd[1506]: time="2024-12-16T09:55:28.760592154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.204431433s" Dec 16 09:55:28.760929 containerd[1506]: time="2024-12-16T09:55:28.760635767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 16 09:55:28.768232 containerd[1506]: time="2024-12-16T09:55:28.768180893Z" level=info msg="CreateContainer within sandbox \"7a4d28c3ada76a7259d0f57fa1aede0c5390762ac80d9da6892d57ba71dd7adc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 09:55:28.786877 containerd[1506]: time="2024-12-16T09:55:28.786830126Z" level=info msg="CreateContainer within sandbox \"7a4d28c3ada76a7259d0f57fa1aede0c5390762ac80d9da6892d57ba71dd7adc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"66010bb4711ac370c0c32c86d25bb76ca45c795e3f942b9aa44501a57cc6950c\"" Dec 16 09:55:28.788098 containerd[1506]: time="2024-12-16T09:55:28.787470575Z" level=info msg="StartContainer for \"66010bb4711ac370c0c32c86d25bb76ca45c795e3f942b9aa44501a57cc6950c\"" Dec 16 09:55:28.899256 systemd[1]: Started cri-containerd-66010bb4711ac370c0c32c86d25bb76ca45c795e3f942b9aa44501a57cc6950c.scope - libcontainer container 66010bb4711ac370c0c32c86d25bb76ca45c795e3f942b9aa44501a57cc6950c. Dec 16 09:55:28.948388 containerd[1506]: time="2024-12-16T09:55:28.948323169Z" level=info msg="StartContainer for \"66010bb4711ac370c0c32c86d25bb76ca45c795e3f942b9aa44501a57cc6950c\" returns successfully" Dec 16 09:55:29.026879 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 09:55:29.027022 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 09:55:29.420673 kubelet[2028]: E1216 09:55:29.420464 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:29.598298 kubelet[2028]: I1216 09:55:29.598238 2028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-l4mhm" podStartSLOduration=4.5486240460000005 podStartE2EDuration="21.598144953s" podCreationTimestamp="2024-12-16 09:55:08 +0000 UTC" firstStartedPulling="2024-12-16 09:55:11.711361411 +0000 UTC m=+3.791028590" lastFinishedPulling="2024-12-16 09:55:28.760882288 +0000 UTC m=+20.840549497" observedRunningTime="2024-12-16 09:55:29.598116099 +0000 UTC m=+21.677783318" watchObservedRunningTime="2024-12-16 09:55:29.598144953 +0000 UTC m=+21.677812162" Dec 16 09:55:29.635916 systemd[1]: run-containerd-runc-k8s.io-66010bb4711ac370c0c32c86d25bb76ca45c795e3f942b9aa44501a57cc6950c-runc.3oBvJm.mount: Deactivated successfully. Dec 16 09:55:30.421309 kubelet[2028]: E1216 09:55:30.421216 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:30.611000 systemd[1]: run-containerd-runc-k8s.io-66010bb4711ac370c0c32c86d25bb76ca45c795e3f942b9aa44501a57cc6950c-runc.sS84gI.mount: Deactivated successfully. Dec 16 09:55:30.710155 kernel: bpftool[2826]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 16 09:55:30.991606 systemd-networkd[1393]: vxlan.calico: Link UP Dec 16 09:55:30.991621 systemd-networkd[1393]: vxlan.calico: Gained carrier Dec 16 09:55:31.422358 kubelet[2028]: E1216 09:55:31.422211 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:32.422951 kubelet[2028]: E1216 09:55:32.422824 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:32.976721 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Dec 16 09:55:33.423102 kubelet[2028]: E1216 09:55:33.422991 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:34.423520 kubelet[2028]: E1216 09:55:34.423399 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:35.423949 kubelet[2028]: E1216 09:55:35.423844 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:36.424085 kubelet[2028]: E1216 09:55:36.423989 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:37.424776 kubelet[2028]: E1216 09:55:37.424667 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:38.425578 kubelet[2028]: E1216 09:55:38.425494 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:38.507855 containerd[1506]: time="2024-12-16T09:55:38.507786433Z" level=info msg="StopPodSandbox for \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\"" Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.614 [INFO][2915] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.615 
[INFO][2915] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" iface="eth0" netns="/var/run/netns/cni-1b07b16b-da65-2614-8b63-4636ade1f644" Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.615 [INFO][2915] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" iface="eth0" netns="/var/run/netns/cni-1b07b16b-da65-2614-8b63-4636ade1f644" Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.617 [INFO][2915] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" iface="eth0" netns="/var/run/netns/cni-1b07b16b-da65-2614-8b63-4636ade1f644" Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.617 [INFO][2915] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.617 [INFO][2915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.695 [INFO][2921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" HandleID="k8s-pod-network.ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.695 [INFO][2921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.695 [INFO][2921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.706 [WARNING][2921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" HandleID="k8s-pod-network.ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.706 [INFO][2921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" HandleID="k8s-pod-network.ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.708 [INFO][2921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:55:38.722642 containerd[1506]: 2024-12-16 09:55:38.717 [INFO][2915] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:55:38.723575 containerd[1506]: time="2024-12-16T09:55:38.723120854Z" level=info msg="TearDown network for sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\" successfully" Dec 16 09:55:38.723575 containerd[1506]: time="2024-12-16T09:55:38.723207046Z" level=info msg="StopPodSandbox for \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\" returns successfully" Dec 16 09:55:38.728916 containerd[1506]: time="2024-12-16T09:55:38.728356506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vv27,Uid:3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af,Namespace:calico-system,Attempt:1,}" Dec 16 09:55:38.730395 systemd[1]: run-netns-cni\x2d1b07b16b\x2dda65\x2d2614\x2d8b63\x2d4636ade1f644.mount: Deactivated successfully. Dec 16 09:55:38.915407 systemd-networkd[1393]: cali3271290d1d6: Link UP Dec 16 09:55:38.918018 systemd-networkd[1393]: cali3271290d1d6: Gained carrier Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.809 [INFO][2928] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--5vv27-eth0 csi-node-driver- calico-system 3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af 1643 0 2024-12-16 09:55:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-5vv27 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3271290d1d6 [] []}} ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Namespace="calico-system" Pod="csi-node-driver-5vv27" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--5vv27-" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.809 [INFO][2928] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Namespace="calico-system" Pod="csi-node-driver-5vv27" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.860 [INFO][2939] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" HandleID="k8s-pod-network.c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.874 [INFO][2939] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" HandleID="k8s-pod-network.c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318d20), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-5vv27", "timestamp":"2024-12-16 09:55:38.860519921 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.874 
[INFO][2939] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.874 [INFO][2939] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.874 [INFO][2939] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.876 [INFO][2939] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" host="10.0.0.4" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.882 [INFO][2939] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.887 [INFO][2939] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.889 [INFO][2939] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.892 [INFO][2939] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.892 [INFO][2939] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" host="10.0.0.4" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.893 [INFO][2939] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87 Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.898 [INFO][2939] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" host="10.0.0.4" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.903 [INFO][2939] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" host="10.0.0.4" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.903 [INFO][2939] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" host="10.0.0.4" Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.903 [INFO][2939] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
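The IPAM trace above shows node 10.0.0.4 confirming affinity for the block 192.168.99.192/26 and claiming 192.168.99.193 for the csi-node-driver pod. A /26 holds 64 addresses (.192 through .255 here), so .193 is the first address after the block's own network address; the snippet below just re-derives that from the logged CIDR:

```go
// Re-derive the block size and membership from the CIDRs in the IPAM log.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, _ := net.ParseCIDR("192.168.99.192/26")
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64

	podIP := net.ParseIP("192.168.99.193")
	fmt.Println("pod IP inside the node's block:", block.Contains(podIP)) // true
}
```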
Dec 16 09:55:38.957222 containerd[1506]: 2024-12-16 09:55:38.903 [INFO][2939] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" HandleID="k8s-pod-network.c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:55:38.958160 containerd[1506]: 2024-12-16 09:55:38.909 [INFO][2928] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Namespace="calico-system" Pod="csi-node-driver-5vv27" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--5vv27-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af", ResourceVersion:"1643", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-5vv27", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3271290d1d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:55:38.958160 containerd[1506]: 2024-12-16 09:55:38.909 [INFO][2928] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Namespace="calico-system" Pod="csi-node-driver-5vv27" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:55:38.958160 containerd[1506]: 2024-12-16 09:55:38.909 [INFO][2928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3271290d1d6 ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Namespace="calico-system" Pod="csi-node-driver-5vv27" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:55:38.958160 containerd[1506]: 2024-12-16 09:55:38.919 [INFO][2928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Namespace="calico-system" Pod="csi-node-driver-5vv27" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:55:38.958160 containerd[1506]: 2024-12-16 09:55:38.920 [INFO][2928] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Namespace="calico-system" Pod="csi-node-driver-5vv27" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--5vv27-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af", ResourceVersion:"1643", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87", Pod:"csi-node-driver-5vv27", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3271290d1d6", MAC:"8a:2e:6e:55:b5:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:55:38.958160 containerd[1506]: 2024-12-16 09:55:38.931 [INFO][2928] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87" Namespace="calico-system" Pod="csi-node-driver-5vv27" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:55:38.986118 containerd[1506]: time="2024-12-16T09:55:38.985384183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:55:38.986118 containerd[1506]: time="2024-12-16T09:55:38.985528894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:55:38.986118 containerd[1506]: time="2024-12-16T09:55:38.985547849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:55:38.986265 containerd[1506]: time="2024-12-16T09:55:38.985844896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:55:39.019293 systemd[1]: Started cri-containerd-c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87.scope - libcontainer container c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87. 
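The large dump above is containerd printing the Calico v3.WorkloadEndpoint that was written to the datastore for the csi pod. Trimmed to the fields actually visible in the log (this reduced struct is an illustration; the real type lives in the projectcalico.org/v3 API), the endpoint amounts to:

```go
// Cut-down view of the WorkloadEndpoint fields shown in the log above.
// All values are copied verbatim from the containerd output.
package main

import "fmt"

type workloadEndpoint struct {
	Node          string
	Pod           string
	Namespace     string
	Endpoint      string
	InterfaceName string
	MAC           string
	IPNetworks    []string
	Profiles      []string
	ContainerID   string
}

func main() {
	ep := workloadEndpoint{
		Node:          "10.0.0.4",
		Pod:           "csi-node-driver-5vv27",
		Namespace:     "calico-system",
		Endpoint:      "eth0",
		InterfaceName: "cali3271290d1d6",
		MAC:           "8a:2e:6e:55:b5:09",
		IPNetworks:    []string{"192.168.99.193/32"},
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.csi-node-driver"},
		ContainerID:   "c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87",
	}
	fmt.Printf("%+v\n", ep)
}
```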
Dec 16 09:55:39.049380 containerd[1506]: time="2024-12-16T09:55:39.049267750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5vv27,Uid:3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af,Namespace:calico-system,Attempt:1,} returns sandbox id \"c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87\"" Dec 16 09:55:39.051379 containerd[1506]: time="2024-12-16T09:55:39.051235103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 16 09:55:39.426503 kubelet[2028]: E1216 09:55:39.426444 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:39.726543 systemd[1]: run-containerd-runc-k8s.io-c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87-runc.JZcMU7.mount: Deactivated successfully. Dec 16 09:55:40.427668 kubelet[2028]: E1216 09:55:40.427606 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:40.501391 containerd[1506]: time="2024-12-16T09:55:40.501313334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:40.502474 containerd[1506]: time="2024-12-16T09:55:40.502425376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 16 09:55:40.504377 containerd[1506]: time="2024-12-16T09:55:40.503339547Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:40.505789 containerd[1506]: time="2024-12-16T09:55:40.505743136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:40.507091 containerd[1506]: time="2024-12-16T09:55:40.506754861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.455492858s" Dec 16 09:55:40.507091 containerd[1506]: time="2024-12-16T09:55:40.506780589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 16 09:55:40.510720 containerd[1506]: time="2024-12-16T09:55:40.510663458Z" level=info msg="CreateContainer within sandbox \"c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 16 09:55:40.533987 containerd[1506]: time="2024-12-16T09:55:40.533933821Z" level=info msg="CreateContainer within sandbox \"c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ad8b6e60f9f78016990f8d2f3eb1de96a8981ae1c36ea78c2d7c9006931a7d9c\"" Dec 16 09:55:40.534712 containerd[1506]: time="2024-12-16T09:55:40.534654631Z" level=info msg="StartContainer for \"ad8b6e60f9f78016990f8d2f3eb1de96a8981ae1c36ea78c2d7c9006931a7d9c\"" Dec 16 09:55:40.585218 systemd[1]: Started cri-containerd-ad8b6e60f9f78016990f8d2f3eb1de96a8981ae1c36ea78c2d7c9006931a7d9c.scope - libcontainer 
container ad8b6e60f9f78016990f8d2f3eb1de96a8981ae1c36ea78c2d7c9006931a7d9c. Dec 16 09:55:40.593525 systemd-networkd[1393]: cali3271290d1d6: Gained IPv6LL Dec 16 09:55:40.631937 containerd[1506]: time="2024-12-16T09:55:40.631659086Z" level=info msg="StartContainer for \"ad8b6e60f9f78016990f8d2f3eb1de96a8981ae1c36ea78c2d7c9006931a7d9c\" returns successfully" Dec 16 09:55:40.633711 containerd[1506]: time="2024-12-16T09:55:40.633618033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 16 09:55:41.428530 kubelet[2028]: E1216 09:55:41.428485 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:41.507196 containerd[1506]: time="2024-12-16T09:55:41.507099670Z" level=info msg="StopPodSandbox for \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\"" Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.558 [INFO][3057] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.558 [INFO][3057] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" iface="eth0" netns="/var/run/netns/cni-ed947976-ae29-b0db-60c8-9fa900c3bbc4" Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.559 [INFO][3057] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" iface="eth0" netns="/var/run/netns/cni-ed947976-ae29-b0db-60c8-9fa900c3bbc4" Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.559 [INFO][3057] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" iface="eth0" netns="/var/run/netns/cni-ed947976-ae29-b0db-60c8-9fa900c3bbc4" Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.559 [INFO][3057] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.559 [INFO][3057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.602 [INFO][3064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" HandleID="k8s-pod-network.8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.603 [INFO][3064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.603 [INFO][3064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.609 [WARNING][3064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" HandleID="k8s-pod-network.8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.609 [INFO][3064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" HandleID="k8s-pod-network.8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.612 [INFO][3064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:55:41.621700 containerd[1506]: 2024-12-16 09:55:41.618 [INFO][3057] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:55:41.623015 containerd[1506]: time="2024-12-16T09:55:41.622412488Z" level=info msg="TearDown network for sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\" successfully" Dec 16 09:55:41.623015 containerd[1506]: time="2024-12-16T09:55:41.622453133Z" level=info msg="StopPodSandbox for \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\" returns successfully" Dec 16 09:55:41.626166 containerd[1506]: time="2024-12-16T09:55:41.625658354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-4wg2x,Uid:ff41c56c-b178-4ae4-bd93-317967373b16,Namespace:default,Attempt:1,}" Dec 16 09:55:41.627009 systemd[1]: run-netns-cni\x2ded947976\x2dae29\x2db0db\x2d60c8\x2d9fa900c3bbc4.mount: Deactivated successfully. Dec 16 09:55:41.801606 systemd-networkd[1393]: cali1e3627c6ab1: Link UP Dec 16 09:55:41.802756 systemd-networkd[1393]: cali1e3627c6ab1: Gained carrier Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.704 [INFO][3074] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0 nginx-deployment-6d5f899847- default ff41c56c-b178-4ae4-bd93-317967373b16 1663 0 2024-12-16 09:55:27 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-6d5f899847-4wg2x eth0 default [] [] [kns.default ksa.default.default] cali1e3627c6ab1 [] []}} ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Namespace="default" Pod="nginx-deployment-6d5f899847-4wg2x" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.704 [INFO][3074] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Namespace="default" Pod="nginx-deployment-6d5f899847-4wg2x" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.748 [INFO][3081] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" HandleID="k8s-pod-network.3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 
09:55:41.759 [INFO][3081] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" HandleID="k8s-pod-network.3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004c8d40), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-6d5f899847-4wg2x", "timestamp":"2024-12-16 09:55:41.748949228 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.759 [INFO][3081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.759 [INFO][3081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.759 [INFO][3081] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.761 [INFO][3081] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" host="10.0.0.4" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.766 [INFO][3081] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.771 [INFO][3081] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.773 [INFO][3081] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.776 [INFO][3081] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.776 [INFO][3081] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" host="10.0.0.4" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.778 [INFO][3081] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76 Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.783 [INFO][3081] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" host="10.0.0.4" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.787 [INFO][3081] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" host="10.0.0.4" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.787 [INFO][3081] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" host="10.0.0.4" Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.788 [INFO][3081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
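The IPAM trace above takes the host-wide lock, confirms the block affinity for 192.168.99.192/26 on host 10.0.0.4, and claims 192.168.99.194/26 for the nginx pod; the CSI driver pod already holds .193 and the NFS provisioner later receives .195. A quick standard-library check (an illustrative sketch, not part of the node's tooling) that these are the first host addresses of that /26:

```python
# Verify that the addresses handed out in this log are consecutive hosts
# inside the affine block 192.168.99.192/26 claimed for host "10.0.0.4".
import ipaddress

block = ipaddress.ip_network("192.168.99.192/26")
assigned = [ipaddress.ip_address(a) for a in
            ("192.168.99.193", "192.168.99.194", "192.168.99.195")]

assert all(a in block for a in assigned)
hosts = list(block.hosts())   # .193 .. .254 (network .192 and broadcast .255 excluded)
assert hosts[:3] == assigned  # the first three host addresses, in order
print(block.num_addresses, "addresses in block;", len(hosts), "usable hosts")
```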
Dec 16 09:55:41.820900 containerd[1506]: 2024-12-16 09:55:41.788 [INFO][3081] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" HandleID="k8s-pod-network.3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:55:41.822513 containerd[1506]: 2024-12-16 09:55:41.793 [INFO][3074] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Namespace="default" Pod="nginx-deployment-6d5f899847-4wg2x" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"ff41c56c-b178-4ae4-bd93-317967373b16", ResourceVersion:"1663", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-6d5f899847-4wg2x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1e3627c6ab1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:55:41.822513 containerd[1506]: 2024-12-16 09:55:41.793 [INFO][3074] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Namespace="default" Pod="nginx-deployment-6d5f899847-4wg2x" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:55:41.822513 containerd[1506]: 2024-12-16 09:55:41.793 [INFO][3074] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e3627c6ab1 ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Namespace="default" Pod="nginx-deployment-6d5f899847-4wg2x" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:55:41.822513 containerd[1506]: 2024-12-16 09:55:41.802 [INFO][3074] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Namespace="default" Pod="nginx-deployment-6d5f899847-4wg2x" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:55:41.822513 containerd[1506]: 2024-12-16 09:55:41.804 [INFO][3074] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Namespace="default" Pod="nginx-deployment-6d5f899847-4wg2x" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"ff41c56c-b178-4ae4-bd93-317967373b16", ResourceVersion:"1663", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76", Pod:"nginx-deployment-6d5f899847-4wg2x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1e3627c6ab1", MAC:"7e:34:f8:7e:03:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:55:41.822513 containerd[1506]: 2024-12-16 09:55:41.813 [INFO][3074] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76" Namespace="default" Pod="nginx-deployment-6d5f899847-4wg2x" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:55:41.850028 containerd[1506]: time="2024-12-16T09:55:41.849907863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:55:41.850028 containerd[1506]: time="2024-12-16T09:55:41.849966734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:55:41.850281 containerd[1506]: time="2024-12-16T09:55:41.849980179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:55:41.850281 containerd[1506]: time="2024-12-16T09:55:41.850071349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:55:41.880349 systemd[1]: Started cri-containerd-3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76.scope - libcontainer container 3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76. 
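Each sandbox start above is mirrored by a systemd transient scope unit named cri-containerd-&lt;sandbox id&gt;.scope. A small Python sketch (a hypothetical log-scraping helper, not part of any shipped tool) for pulling those IDs out of journal lines like the one just above, so scope units can be matched back to the sandbox IDs containerd reports:

```python
# Extract the 64-hex-character container/sandbox ID from systemd
# "Started cri-containerd-<id>.scope" messages as they appear in this log.
import re

SCOPE_RE = re.compile(r"Started cri-containerd-([0-9a-f]{64})\.scope")

line = ("systemd[1]: Started cri-containerd-"
        "3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76.scope"
        " - libcontainer container "
        "3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76.")
match = SCOPE_RE.search(line)
print(match.group(1))  # sandbox ID of the nginx pod sandbox started above
```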
Dec 16 09:55:41.960089 containerd[1506]: time="2024-12-16T09:55:41.960026377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-4wg2x,Uid:ff41c56c-b178-4ae4-bd93-317967373b16,Namespace:default,Attempt:1,} returns sandbox id \"3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76\"" Dec 16 09:55:42.128420 containerd[1506]: time="2024-12-16T09:55:42.128272350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:42.129919 containerd[1506]: time="2024-12-16T09:55:42.129831790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 16 09:55:42.130965 containerd[1506]: time="2024-12-16T09:55:42.130898086Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:42.133213 containerd[1506]: time="2024-12-16T09:55:42.133172955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:42.133997 containerd[1506]: time="2024-12-16T09:55:42.133585938Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.499933701s" Dec 16 09:55:42.133997 containerd[1506]: time="2024-12-16T09:55:42.133612598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 16 09:55:42.134730 containerd[1506]: time="2024-12-16T09:55:42.134706196Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 16 09:55:42.135469 containerd[1506]: time="2024-12-16T09:55:42.135425943Z" level=info msg="CreateContainer within sandbox \"c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 16 09:55:42.159480 containerd[1506]: time="2024-12-16T09:55:42.159412150Z" level=info msg="CreateContainer within sandbox \"c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"372d5b1087e491830e5087246f38fdf9a25fd8a29fc18a2b729f5a28d076af9b\"" Dec 16 09:55:42.160260 containerd[1506]: time="2024-12-16T09:55:42.160231994Z" level=info msg="StartContainer for \"372d5b1087e491830e5087246f38fdf9a25fd8a29fc18a2b729f5a28d076af9b\"" Dec 16 09:55:42.196199 systemd[1]: Started cri-containerd-372d5b1087e491830e5087246f38fdf9a25fd8a29fc18a2b729f5a28d076af9b.scope - libcontainer container 372d5b1087e491830e5087246f38fdf9a25fd8a29fc18a2b729f5a28d076af9b. 
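The image pulls above finish with a containerd message of the form Pulled image "&lt;ref&gt;" ... size "&lt;bytes&gt;" in &lt;seconds&gt;s. A hedged sketch that extracts the reference, reported size, and duration and derives a nominal pull rate; the string below is a fragment of the node-driver-registrar message above with the journal's \" escaping undone:

```python
# Parse a containerd "Pulled image ... in <duration>" message from this log.
import re

PULLED_RE = re.compile(
    r'Pulled image "(?P<image>[^"]+)".*size "(?P<size>\d+)" in (?P<secs>[\d.]+)s')

# Fragment of the message logged above (escaping removed, middle fields elided).
msg = ('Pulled image "ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1" '
       'with image id "sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f", '
       'size "11994117" in 1.499933701s')

m = PULLED_RE.search(msg)
rate = int(m["size"]) / float(m["secs"]) / 1e6   # nominal MB/s based on the reported size
print(m["image"], f'{float(m["secs"]):.2f}s', f"{rate:.1f} MB/s")  # ~8.0 MB/s here
```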
Dec 16 09:55:42.227351 containerd[1506]: time="2024-12-16T09:55:42.227292614Z" level=info msg="StartContainer for \"372d5b1087e491830e5087246f38fdf9a25fd8a29fc18a2b729f5a28d076af9b\" returns successfully" Dec 16 09:55:42.428845 kubelet[2028]: E1216 09:55:42.428638 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:42.510835 kubelet[2028]: I1216 09:55:42.510755 2028 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 16 09:55:42.512211 kubelet[2028]: I1216 09:55:42.512164 2028 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 16 09:55:42.630798 kubelet[2028]: I1216 09:55:42.630767 2028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-5vv27" podStartSLOduration=31.547651502 podStartE2EDuration="34.630731873s" podCreationTimestamp="2024-12-16 09:55:08 +0000 UTC" firstStartedPulling="2024-12-16 09:55:39.050829913 +0000 UTC m=+31.130497113" lastFinishedPulling="2024-12-16 09:55:42.133910315 +0000 UTC m=+34.213577484" observedRunningTime="2024-12-16 09:55:42.630480753 +0000 UTC m=+34.710147932" watchObservedRunningTime="2024-12-16 09:55:42.630731873 +0000 UTC m=+34.710399052" Dec 16 09:55:43.429097 kubelet[2028]: E1216 09:55:43.428976 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:43.664288 systemd-networkd[1393]: cali1e3627c6ab1: Gained IPv6LL Dec 16 09:55:44.430186 kubelet[2028]: E1216 09:55:44.430136 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:44.486996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497893759.mount: Deactivated successfully. 
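The pod_startup_latency_tracker line above reports podStartE2EDuration=34.630731873s and podStartSLOduration=31.547651502 for csi-node-driver-5vv27. Those figures are consistent with E2E = observedRunningTime minus podCreationTimestamp, and SLO = E2E minus the image-pull window taken from the monotonic offsets (m=+...); the arithmetic below is an interpretation of the logged values, not a statement about kubelet internals:

```python
# Re-derive the startup-latency figures logged above from the timestamps
# and monotonic offsets printed on the same line.
created_wall    = 8.0            # podCreationTimestamp 09:55:08
running_wall    = 42.630731873   # observedRunningTime  09:55:42.630731873
first_pull_mono = 31.130497113   # firstStartedPulling  m=+31.130497113
last_pull_mono  = 34.213577484   # lastFinishedPulling  m=+34.213577484

e2e = running_wall - created_wall               # 34.630731873 -> podStartE2EDuration
pull_window = last_pull_mono - first_pull_mono  # ~3.083080371 s spent pulling images
slo = e2e - pull_window                         # ~31.547651502 -> podStartSLOduration (up to float rounding)
print(f"E2E={e2e:.9f}s  pull={pull_window:.9f}s  SLO={slo:.9f}s")
```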
Dec 16 09:55:45.430990 kubelet[2028]: E1216 09:55:45.430876 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:45.692424 containerd[1506]: time="2024-12-16T09:55:45.692204166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:45.694282 containerd[1506]: time="2024-12-16T09:55:45.694203440Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 16 09:55:45.695698 containerd[1506]: time="2024-12-16T09:55:45.695579426Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:45.704086 containerd[1506]: time="2024-12-16T09:55:45.702648382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:55:45.707440 containerd[1506]: time="2024-12-16T09:55:45.707362698Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 3.572620797s" Dec 16 09:55:45.707531 containerd[1506]: time="2024-12-16T09:55:45.707448479Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 16 09:55:45.729330 containerd[1506]: time="2024-12-16T09:55:45.729274285Z" level=info msg="CreateContainer within sandbox \"3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 16 09:55:45.749113 containerd[1506]: time="2024-12-16T09:55:45.749051996Z" level=info msg="CreateContainer within sandbox \"3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"02d5921cb1d133dced04fe237ad13a84855cda2fbc83bb032272d14214b3ff90\"" Dec 16 09:55:45.750918 containerd[1506]: time="2024-12-16T09:55:45.750873357Z" level=info msg="StartContainer for \"02d5921cb1d133dced04fe237ad13a84855cda2fbc83bb032272d14214b3ff90\"" Dec 16 09:55:45.794904 systemd[1]: run-containerd-runc-k8s.io-02d5921cb1d133dced04fe237ad13a84855cda2fbc83bb032272d14214b3ff90-runc.I06udK.mount: Deactivated successfully. Dec 16 09:55:45.803259 systemd[1]: Started cri-containerd-02d5921cb1d133dced04fe237ad13a84855cda2fbc83bb032272d14214b3ff90.scope - libcontainer container 02d5921cb1d133dced04fe237ad13a84855cda2fbc83bb032272d14214b3ff90. 
Dec 16 09:55:45.841557 containerd[1506]: time="2024-12-16T09:55:45.840384304Z" level=info msg="StartContainer for \"02d5921cb1d133dced04fe237ad13a84855cda2fbc83bb032272d14214b3ff90\" returns successfully" Dec 16 09:55:46.431529 kubelet[2028]: E1216 09:55:46.431425 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:46.647253 kubelet[2028]: I1216 09:55:46.647176 2028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-4wg2x" podStartSLOduration=15.899307384 podStartE2EDuration="19.647117163s" podCreationTimestamp="2024-12-16 09:55:27 +0000 UTC" firstStartedPulling="2024-12-16 09:55:41.961705811 +0000 UTC m=+34.041372990" lastFinishedPulling="2024-12-16 09:55:45.70951558 +0000 UTC m=+37.789182769" observedRunningTime="2024-12-16 09:55:46.646866906 +0000 UTC m=+38.726534124" watchObservedRunningTime="2024-12-16 09:55:46.647117163 +0000 UTC m=+38.726784383" Dec 16 09:55:47.431803 kubelet[2028]: E1216 09:55:47.431696 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:48.403246 kubelet[2028]: E1216 09:55:48.403134 2028 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:48.432471 kubelet[2028]: E1216 09:55:48.432385 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:49.432916 kubelet[2028]: E1216 09:55:49.432790 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:50.433851 kubelet[2028]: E1216 09:55:50.433735 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:51.434802 kubelet[2028]: E1216 09:55:51.434697 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:52.435909 kubelet[2028]: E1216 09:55:52.435693 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:53.436435 kubelet[2028]: E1216 09:55:53.436329 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:54.437525 kubelet[2028]: E1216 09:55:54.437389 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:55.438596 kubelet[2028]: E1216 09:55:55.438496 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:55.963534 kubelet[2028]: I1216 09:55:55.963453 2028 topology_manager.go:215] "Topology Admit Handler" podUID="1217f597-ab95-43b3-b5db-f05254e77edb" podNamespace="default" podName="nfs-server-provisioner-0" Dec 16 09:55:55.975647 systemd[1]: Created slice kubepods-besteffort-pod1217f597_ab95_43b3_b5db_f05254e77edb.slice - libcontainer container kubepods-besteffort-pod1217f597_ab95_43b3_b5db_f05254e77edb.slice. 
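The slice created above encodes the pod's UID (1217f597-ab95-43b3-b5db-f05254e77edb) and its BestEffort QoS class. A minimal sketch of that name mapping as it appears in this one log line; the kubelet's systemd cgroup driver is the actual authority, and other QoS classes are assumed to follow the same shape:

```python
# Map a pod UID to the systemd slice name observed above
# (dashes in the UID become underscores, prefixed by the QoS class).
def pod_slice_name(uid: str, qos: str = "besteffort") -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

assert pod_slice_name("1217f597-ab95-43b3-b5db-f05254e77edb") == \
    "kubepods-besteffort-pod1217f597_ab95_43b3_b5db_f05254e77edb.slice"
```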
Dec 16 09:55:56.139929 kubelet[2028]: I1216 09:55:56.139823 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1217f597-ab95-43b3-b5db-f05254e77edb-data\") pod \"nfs-server-provisioner-0\" (UID: \"1217f597-ab95-43b3-b5db-f05254e77edb\") " pod="default/nfs-server-provisioner-0" Dec 16 09:55:56.139929 kubelet[2028]: I1216 09:55:56.139906 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfs8p\" (UniqueName: \"kubernetes.io/projected/1217f597-ab95-43b3-b5db-f05254e77edb-kube-api-access-wfs8p\") pod \"nfs-server-provisioner-0\" (UID: \"1217f597-ab95-43b3-b5db-f05254e77edb\") " pod="default/nfs-server-provisioner-0" Dec 16 09:55:56.281639 containerd[1506]: time="2024-12-16T09:55:56.281558823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1217f597-ab95-43b3-b5db-f05254e77edb,Namespace:default,Attempt:0,}" Dec 16 09:55:56.439989 kubelet[2028]: E1216 09:55:56.439946 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:56.472155 systemd-networkd[1393]: cali60e51b789ff: Link UP Dec 16 09:55:56.476197 systemd-networkd[1393]: cali60e51b789ff: Gained carrier Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.369 [INFO][3309] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 1217f597-ab95-43b3-b5db-f05254e77edb 1733 0 2024-12-16 09:55:55 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.369 [INFO][3309] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.425 [INFO][3319] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" HandleID="k8s-pod-network.493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.435 [INFO][3319] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" HandleID="k8s-pod-network.493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265730), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-16 09:55:56.425935862 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.435 [INFO][3319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.435 [INFO][3319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.435 [INFO][3319] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.437 [INFO][3319] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" host="10.0.0.4" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.443 [INFO][3319] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.448 [INFO][3319] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.450 [INFO][3319] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.452 [INFO][3319] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.452 [INFO][3319] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" host="10.0.0.4" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.454 [INFO][3319] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550 Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.457 [INFO][3319] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" host="10.0.0.4" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.463 [INFO][3319] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" host="10.0.0.4" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.464 [INFO][3319] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" host="10.0.0.4" Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.464 [INFO][3319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 16 09:55:56.491104 containerd[1506]: 2024-12-16 09:55:56.464 [INFO][3319] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" HandleID="k8s-pod-network.493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 16 09:55:56.492573 containerd[1506]: 2024-12-16 09:55:56.467 [INFO][3309] cni-plugin/k8s.go 386: Populated endpoint ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"1217f597-ab95-43b3-b5db-f05254e77edb", ResourceVersion:"1733", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:55:56.492573 containerd[1506]: 2024-12-16 09:55:56.468 [INFO][3309] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 16 09:55:56.492573 containerd[1506]: 2024-12-16 09:55:56.468 [INFO][3309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 16 09:55:56.492573 containerd[1506]: 2024-12-16 09:55:56.477 [INFO][3309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 16 09:55:56.492907 containerd[1506]: 2024-12-16 09:55:56.478 [INFO][3309] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"1217f597-ab95-43b3-b5db-f05254e77edb", ResourceVersion:"1733", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"26:e7:a7:49:a6:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:55:56.492907 containerd[1506]: 2024-12-16 09:55:56.486 [INFO][3309] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 16 09:55:56.530570 containerd[1506]: time="2024-12-16T09:55:56.530375330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:55:56.530570 containerd[1506]: time="2024-12-16T09:55:56.530418049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:55:56.530570 containerd[1506]: time="2024-12-16T09:55:56.530431214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:55:56.530570 containerd[1506]: time="2024-12-16T09:55:56.530504852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:55:56.555188 systemd[1]: Started cri-containerd-493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550.scope - libcontainer container 493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550. 
Dec 16 09:55:56.604909 containerd[1506]: time="2024-12-16T09:55:56.604861317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1217f597-ab95-43b3-b5db-f05254e77edb,Namespace:default,Attempt:0,} returns sandbox id \"493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550\"" Dec 16 09:55:56.608523 containerd[1506]: time="2024-12-16T09:55:56.608184441Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 16 09:55:57.441826 kubelet[2028]: E1216 09:55:57.441754 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:57.744188 systemd-networkd[1393]: cali60e51b789ff: Gained IPv6LL Dec 16 09:55:58.442504 kubelet[2028]: E1216 09:55:58.442442 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:55:58.551273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894492211.mount: Deactivated successfully. Dec 16 09:55:59.443022 kubelet[2028]: E1216 09:55:59.442754 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:00.403098 containerd[1506]: time="2024-12-16T09:56:00.403017257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:56:00.404551 containerd[1506]: time="2024-12-16T09:56:00.404503481Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039474" Dec 16 09:56:00.405959 containerd[1506]: time="2024-12-16T09:56:00.405920384Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:56:00.411796 containerd[1506]: time="2024-12-16T09:56:00.411759280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:56:00.413220 containerd[1506]: time="2024-12-16T09:56:00.412690276Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 3.804460018s" Dec 16 09:56:00.413220 containerd[1506]: time="2024-12-16T09:56:00.412723768Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 16 09:56:00.415101 containerd[1506]: time="2024-12-16T09:56:00.415072608Z" level=info msg="CreateContainer within sandbox \"493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 16 09:56:00.431123 containerd[1506]: time="2024-12-16T09:56:00.431037772Z" level=info msg="CreateContainer within sandbox \"493aca3ba4d32504d38bd9e0e52ca95dedfae38f76821f43f6b697a5c244c550\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id 
\"b83de24da33afc91c734a449151daebc744bb83ec136ba504e58353f10a75a7c\"" Dec 16 09:56:00.431770 containerd[1506]: time="2024-12-16T09:56:00.431745447Z" level=info msg="StartContainer for \"b83de24da33afc91c734a449151daebc744bb83ec136ba504e58353f10a75a7c\"" Dec 16 09:56:00.446258 kubelet[2028]: E1216 09:56:00.446223 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:00.480252 systemd[1]: Started cri-containerd-b83de24da33afc91c734a449151daebc744bb83ec136ba504e58353f10a75a7c.scope - libcontainer container b83de24da33afc91c734a449151daebc744bb83ec136ba504e58353f10a75a7c. Dec 16 09:56:00.507731 containerd[1506]: time="2024-12-16T09:56:00.507664200Z" level=info msg="StartContainer for \"b83de24da33afc91c734a449151daebc744bb83ec136ba504e58353f10a75a7c\" returns successfully" Dec 16 09:56:00.708423 kubelet[2028]: I1216 09:56:00.707790 2028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.901778781 podStartE2EDuration="5.707731606s" podCreationTimestamp="2024-12-16 09:55:55 +0000 UTC" firstStartedPulling="2024-12-16 09:55:56.607469953 +0000 UTC m=+48.687137151" lastFinishedPulling="2024-12-16 09:56:00.413422797 +0000 UTC m=+52.493089976" observedRunningTime="2024-12-16 09:56:00.706943188 +0000 UTC m=+52.786610388" watchObservedRunningTime="2024-12-16 09:56:00.707731606 +0000 UTC m=+52.787398825" Dec 16 09:56:01.426529 systemd[1]: run-containerd-runc-k8s.io-b83de24da33afc91c734a449151daebc744bb83ec136ba504e58353f10a75a7c-runc.4xITZE.mount: Deactivated successfully. Dec 16 09:56:01.447307 kubelet[2028]: E1216 09:56:01.447221 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:02.448350 kubelet[2028]: E1216 09:56:02.448252 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:03.448556 kubelet[2028]: E1216 09:56:03.448454 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:04.449763 kubelet[2028]: E1216 09:56:04.449683 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:05.450514 kubelet[2028]: E1216 09:56:05.450432 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:06.451370 kubelet[2028]: E1216 09:56:06.451261 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:07.452002 kubelet[2028]: E1216 09:56:07.451908 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:08.402786 kubelet[2028]: E1216 09:56:08.402683 2028 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:08.426495 containerd[1506]: time="2024-12-16T09:56:08.426438157Z" level=info msg="StopPodSandbox for \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\"" Dec 16 09:56:08.452422 kubelet[2028]: E1216 09:56:08.452358 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.489 [WARNING][3498] cni-plugin/k8s.go 572: 
CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"ff41c56c-b178-4ae4-bd93-317967373b16", ResourceVersion:"1693", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76", Pod:"nginx-deployment-6d5f899847-4wg2x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1e3627c6ab1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.490 [INFO][3498] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.490 [INFO][3498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" iface="eth0" netns="" Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.490 [INFO][3498] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.490 [INFO][3498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.543 [INFO][3504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" HandleID="k8s-pod-network.8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.543 [INFO][3504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.543 [INFO][3504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.552 [WARNING][3504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" HandleID="k8s-pod-network.8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.552 [INFO][3504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" HandleID="k8s-pod-network.8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.553 [INFO][3504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:56:08.560181 containerd[1506]: 2024-12-16 09:56:08.556 [INFO][3498] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:56:08.561804 containerd[1506]: time="2024-12-16T09:56:08.560228876Z" level=info msg="TearDown network for sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\" successfully" Dec 16 09:56:08.561804 containerd[1506]: time="2024-12-16T09:56:08.560260435Z" level=info msg="StopPodSandbox for \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\" returns successfully" Dec 16 09:56:08.566115 containerd[1506]: time="2024-12-16T09:56:08.566077212Z" level=info msg="RemovePodSandbox for \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\"" Dec 16 09:56:08.566195 containerd[1506]: time="2024-12-16T09:56:08.566114432Z" level=info msg="Forcibly stopping sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\"" Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.626 [WARNING][3524] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"ff41c56c-b178-4ae4-bd93-317967373b16", ResourceVersion:"1693", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"3c374ad8b76d7d818ebb25914dd5447c9be636f0fa98218fbf368e9381fc4c76", Pod:"nginx-deployment-6d5f899847-4wg2x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1e3627c6ab1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.626 [INFO][3524] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.626 [INFO][3524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" iface="eth0" netns="" Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.626 [INFO][3524] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.626 [INFO][3524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.658 [INFO][3530] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" HandleID="k8s-pod-network.8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.658 [INFO][3530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.659 [INFO][3530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.665 [WARNING][3530] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" HandleID="k8s-pod-network.8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.665 [INFO][3530] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" HandleID="k8s-pod-network.8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--4wg2x-eth0" Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.667 [INFO][3530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:56:08.675347 containerd[1506]: 2024-12-16 09:56:08.670 [INFO][3524] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75" Dec 16 09:56:08.675347 containerd[1506]: time="2024-12-16T09:56:08.675157860Z" level=info msg="TearDown network for sandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\" successfully" Dec 16 09:56:08.705490 containerd[1506]: time="2024-12-16T09:56:08.705399994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:56:08.705620 containerd[1506]: time="2024-12-16T09:56:08.705514339Z" level=info msg="RemovePodSandbox \"8676873e0c5b463954275491d346168ff6e1c9fa04be83fadac2c50e0e16bc75\" returns successfully" Dec 16 09:56:08.706366 containerd[1506]: time="2024-12-16T09:56:08.706300151Z" level=info msg="StopPodSandbox for \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\"" Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.764 [WARNING][3548] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--5vv27-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af", ResourceVersion:"1675", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87", Pod:"csi-node-driver-5vv27", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3271290d1d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.764 [INFO][3548] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.764 [INFO][3548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" iface="eth0" netns="" Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.764 [INFO][3548] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.764 [INFO][3548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.796 [INFO][3554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" HandleID="k8s-pod-network.ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.797 [INFO][3554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.797 [INFO][3554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.805 [WARNING][3554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" HandleID="k8s-pod-network.ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.805 [INFO][3554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" HandleID="k8s-pod-network.ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.807 [INFO][3554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:56:08.815902 containerd[1506]: 2024-12-16 09:56:08.811 [INFO][3548] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:56:08.817401 containerd[1506]: time="2024-12-16T09:56:08.815920109Z" level=info msg="TearDown network for sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\" successfully" Dec 16 09:56:08.817401 containerd[1506]: time="2024-12-16T09:56:08.815951408Z" level=info msg="StopPodSandbox for \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\" returns successfully" Dec 16 09:56:08.817401 containerd[1506]: time="2024-12-16T09:56:08.817304402Z" level=info msg="RemovePodSandbox for \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\"" Dec 16 09:56:08.817401 containerd[1506]: time="2024-12-16T09:56:08.817338295Z" level=info msg="Forcibly stopping sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\"" Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.870 [WARNING][3572] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--5vv27-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b3f85f9-f74d-4e2a-9c4b-5a8a7393e8af", ResourceVersion:"1675", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"c3921329be0b99ab44432243751b1f497257d4211f21d56b85ac2a5312976e87", Pod:"csi-node-driver-5vv27", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3271290d1d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.870 [INFO][3572] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.870 [INFO][3572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" iface="eth0" netns="" Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.870 [INFO][3572] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.870 [INFO][3572] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.900 [INFO][3578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" HandleID="k8s-pod-network.ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.900 [INFO][3578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.900 [INFO][3578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.907 [WARNING][3578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" HandleID="k8s-pod-network.ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.907 [INFO][3578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" HandleID="k8s-pod-network.ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Workload="10.0.0.4-k8s-csi--node--driver--5vv27-eth0" Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.909 [INFO][3578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 16 09:56:08.920572 containerd[1506]: 2024-12-16 09:56:08.914 [INFO][3572] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe" Dec 16 09:56:08.920572 containerd[1506]: time="2024-12-16T09:56:08.920192439Z" level=info msg="TearDown network for sandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\" successfully" Dec 16 09:56:08.924552 containerd[1506]: time="2024-12-16T09:56:08.924496953Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 16 09:56:08.924660 containerd[1506]: time="2024-12-16T09:56:08.924589286Z" level=info msg="RemovePodSandbox \"ceb7e2f19c2c5d1096cb5f0485954ffe9d0544db69c2b6349a7d06a55288f0fe\" returns successfully" Dec 16 09:56:09.453803 kubelet[2028]: E1216 09:56:09.453709 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:10.039376 kubelet[2028]: I1216 09:56:10.039309 2028 topology_manager.go:215] "Topology Admit Handler" podUID="98414ef4-b853-4bb3-8075-fd646c21ade7" podNamespace="default" podName="test-pod-1" Dec 16 09:56:10.052258 systemd[1]: Created slice kubepods-besteffort-pod98414ef4_b853_4bb3_8075_fd646c21ade7.slice - libcontainer container kubepods-besteffort-pod98414ef4_b853_4bb3_8075_fd646c21ade7.slice. Dec 16 09:56:10.238256 kubelet[2028]: I1216 09:56:10.238162 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-54ce2999-d54f-4a77-9eb3-bbb177969ea9\" (UniqueName: \"kubernetes.io/nfs/98414ef4-b853-4bb3-8075-fd646c21ade7-pvc-54ce2999-d54f-4a77-9eb3-bbb177969ea9\") pod \"test-pod-1\" (UID: \"98414ef4-b853-4bb3-8075-fd646c21ade7\") " pod="default/test-pod-1" Dec 16 09:56:10.238256 kubelet[2028]: I1216 09:56:10.238242 2028 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjbt\" (UniqueName: \"kubernetes.io/projected/98414ef4-b853-4bb3-8075-fd646c21ade7-kube-api-access-cjjbt\") pod \"test-pod-1\" (UID: \"98414ef4-b853-4bb3-8075-fd646c21ade7\") " pod="default/test-pod-1" Dec 16 09:56:10.404272 kernel: FS-Cache: Loaded Dec 16 09:56:10.454971 kubelet[2028]: E1216 09:56:10.454876 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:10.513571 kernel: RPC: Registered named UNIX socket transport module. Dec 16 09:56:10.513702 kernel: RPC: Registered udp transport module. Dec 16 09:56:10.513736 kernel: RPC: Registered tcp transport module. 
Dec 16 09:56:10.514325 kernel: RPC: Registered tcp-with-tls transport module. Dec 16 09:56:10.515296 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 16 09:56:10.851196 kernel: NFS: Registering the id_resolver key type Dec 16 09:56:10.851354 kernel: Key type id_resolver registered Dec 16 09:56:10.857511 kernel: Key type id_legacy registered Dec 16 09:56:10.909277 nfsidmap[3600]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 16 09:56:10.915762 nfsidmap[3601]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 16 09:56:10.959008 containerd[1506]: time="2024-12-16T09:56:10.958944105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:98414ef4-b853-4bb3-8075-fd646c21ade7,Namespace:default,Attempt:0,}" Dec 16 09:56:11.108728 systemd-networkd[1393]: cali5ec59c6bf6e: Link UP Dec 16 09:56:11.111654 systemd-networkd[1393]: cali5ec59c6bf6e: Gained carrier Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.029 [INFO][3602] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default 98414ef4-b853-4bb3-8075-fd646c21ade7 1791 0 2024-12-16 09:55:58 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.029 [INFO][3602] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.059 [INFO][3613] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" HandleID="k8s-pod-network.871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Workload="10.0.0.4-k8s-test--pod--1-eth0" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.072 [INFO][3613] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" HandleID="k8s-pod-network.871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b70), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2024-12-16 09:56:11.059287021 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.072 [INFO][3613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.072 [INFO][3613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.072 [INFO][3613] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.074 [INFO][3613] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" host="10.0.0.4" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.078 [INFO][3613] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.082 [INFO][3613] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.084 [INFO][3613] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.086 [INFO][3613] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.086 [INFO][3613] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" host="10.0.0.4" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.089 [INFO][3613] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799 Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.093 [INFO][3613] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" host="10.0.0.4" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.098 [INFO][3613] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 handle="k8s-pod-network.871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" host="10.0.0.4" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.098 [INFO][3613] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" host="10.0.0.4" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.098 [INFO][3613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
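The ipam/ipam.go sequence above walks the usual assignment path for test-pod-1: confirm the host's affinity for block 192.168.99.192/26, load the block, claim one address, and write the block back, ending with 192.168.99.196 claimed. A toy Go sketch of the "first free address in the affine block" step; which addresses count as already in use is an assumption here (the .193 and .194 endpoints appear earlier in this log, the rest is illustrative):

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeIP walks a small IPv4 block and returns the first address that is
// not already allocated. In Calico the allocation state lives in the block
// resource in the datastore; here it is just a map.
func nextFreeIP(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.99.192/26")
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.99.192"): true, // treated as reserved for illustration
		netip.MustParseAddr("192.168.99.193"): true, // csi-node-driver-5vv27
		netip.MustParseAddr("192.168.99.194"): true, // nginx-deployment-6d5f899847-4wg2x
		netip.MustParseAddr("192.168.99.195"): true, // assumed in use
	}
	ip, ok := nextFreeIP(block, allocated)
	fmt.Println(ip, ok) // 192.168.99.196 true, matching the address claimed above
}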
Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.098 [INFO][3613] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" HandleID="k8s-pod-network.871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Workload="10.0.0.4-k8s-test--pod--1-eth0" Dec 16 09:56:11.129418 containerd[1506]: 2024-12-16 09:56:11.103 [INFO][3602] cni-plugin/k8s.go 386: Populated endpoint ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"98414ef4-b853-4bb3-8075-fd646c21ade7", ResourceVersion:"1791", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:56:11.130597 containerd[1506]: 2024-12-16 09:56:11.103 [INFO][3602] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Dec 16 09:56:11.130597 containerd[1506]: 2024-12-16 09:56:11.103 [INFO][3602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Dec 16 09:56:11.130597 containerd[1506]: 2024-12-16 09:56:11.110 [INFO][3602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Dec 16 09:56:11.130597 containerd[1506]: 2024-12-16 09:56:11.111 [INFO][3602] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"98414ef4-b853-4bb3-8075-fd646c21ade7", ResourceVersion:"1791", Generation:0, CreationTimestamp:time.Date(2024, time.December, 16, 9, 55, 58, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"aa:3c:1a:3a:39:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 16 09:56:11.130597 containerd[1506]: 2024-12-16 09:56:11.121 [INFO][3602] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Dec 16 09:56:11.153591 containerd[1506]: time="2024-12-16T09:56:11.153251348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 16 09:56:11.153591 containerd[1506]: time="2024-12-16T09:56:11.153335416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 16 09:56:11.153591 containerd[1506]: time="2024-12-16T09:56:11.153368728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:56:11.154149 containerd[1506]: time="2024-12-16T09:56:11.153553354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 16 09:56:11.172202 systemd[1]: Started cri-containerd-871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799.scope - libcontainer container 871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799. 
Dec 16 09:56:11.214315 containerd[1506]: time="2024-12-16T09:56:11.214275524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:98414ef4-b853-4bb3-8075-fd646c21ade7,Namespace:default,Attempt:0,} returns sandbox id \"871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799\"" Dec 16 09:56:11.216967 containerd[1506]: time="2024-12-16T09:56:11.216748237Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 16 09:56:11.455545 kubelet[2028]: E1216 09:56:11.455306 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:11.594123 containerd[1506]: time="2024-12-16T09:56:11.593972807Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 09:56:11.595680 containerd[1506]: time="2024-12-16T09:56:11.595565501Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 16 09:56:11.600957 containerd[1506]: time="2024-12-16T09:56:11.600876079Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 384.097156ms" Dec 16 09:56:11.600957 containerd[1506]: time="2024-12-16T09:56:11.600930651Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 16 09:56:11.603921 containerd[1506]: time="2024-12-16T09:56:11.603815837Z" level=info msg="CreateContainer within sandbox \"871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 16 09:56:11.644861 containerd[1506]: time="2024-12-16T09:56:11.644778669Z" level=info msg="CreateContainer within sandbox \"871787fdc5f29347565b87c432960e2b9d91e04e508fb99c14d542c891d1d799\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"53c0680262a48eedfa8c090c8ae52ebb042b1bc0e0e0ef319968fafd67fe8fbd\"" Dec 16 09:56:11.646131 containerd[1506]: time="2024-12-16T09:56:11.645985830Z" level=info msg="StartContainer for \"53c0680262a48eedfa8c090c8ae52ebb042b1bc0e0e0ef319968fafd67fe8fbd\"" Dec 16 09:56:11.702286 systemd[1]: Started cri-containerd-53c0680262a48eedfa8c090c8ae52ebb042b1bc0e0e0ef319968fafd67fe8fbd.scope - libcontainer container 53c0680262a48eedfa8c090c8ae52ebb042b1bc0e0e0ef319968fafd67fe8fbd. 
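Read together, the volume lines for pvc-54ce2999-d54f-4a77-9eb3-bbb177969ea9 (an NFS-backed claim), the RunPodSandbox call, the pull of ghcr.io/flatcar/nginx:latest and the CreateContainer/StartContainer pair describe one pod: test-pod-1 in the default namespace, running a container named test from that image with the claim mounted. A hypothetical reconstruction of such a pod in Go using client-go; the pod name, namespace, container name and image come from the log, while the kubeconfig path, claim name and mount path are assumptions:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (path assumed).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod-1", Namespace: "default"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: "test-claim", // assumed; backs pvc-54ce2999-... in the log
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "ghcr.io/flatcar/nginx:latest",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "data",
					MountPath: "/usr/share/nginx/html", // assumed mount path
				}},
			}},
		},
	}

	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}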
Dec 16 09:56:11.753449 containerd[1506]: time="2024-12-16T09:56:11.753159524Z" level=info msg="StartContainer for \"53c0680262a48eedfa8c090c8ae52ebb042b1bc0e0e0ef319968fafd67fe8fbd\" returns successfully" Dec 16 09:56:12.400483 systemd-networkd[1393]: cali5ec59c6bf6e: Gained IPv6LL Dec 16 09:56:12.456021 kubelet[2028]: E1216 09:56:12.455957 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:12.754118 kubelet[2028]: I1216 09:56:12.753731 2028 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.368696871 podStartE2EDuration="14.753670028s" podCreationTimestamp="2024-12-16 09:55:58 +0000 UTC" firstStartedPulling="2024-12-16 09:56:11.216320976 +0000 UTC m=+63.295988155" lastFinishedPulling="2024-12-16 09:56:11.601294103 +0000 UTC m=+63.680961312" observedRunningTime="2024-12-16 09:56:12.753278354 +0000 UTC m=+64.832945563" watchObservedRunningTime="2024-12-16 09:56:12.753670028 +0000 UTC m=+64.833337247" Dec 16 09:56:13.456807 kubelet[2028]: E1216 09:56:13.456730 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:14.458100 kubelet[2028]: E1216 09:56:14.457954 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:15.458749 kubelet[2028]: E1216 09:56:15.458622 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:16.460000 kubelet[2028]: E1216 09:56:16.459189 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:17.460425 kubelet[2028]: E1216 09:56:17.460327 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:18.461208 kubelet[2028]: E1216 09:56:18.461122 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:19.462360 kubelet[2028]: E1216 09:56:19.462264 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:20.462726 kubelet[2028]: E1216 09:56:20.462636 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:21.463274 kubelet[2028]: E1216 09:56:21.463155 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:22.464097 kubelet[2028]: E1216 09:56:22.463995 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:23.464736 kubelet[2028]: E1216 09:56:23.464659 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:24.465379 kubelet[2028]: E1216 09:56:24.465261 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:25.466289 kubelet[2028]: E1216 09:56:25.466196 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:26.467306 kubelet[2028]: E1216 09:56:26.467206 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:27.468412 kubelet[2028]: E1216 09:56:27.468321 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:28.403271 kubelet[2028]: E1216 09:56:28.403175 2028 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:28.469024 kubelet[2028]: E1216 09:56:28.468930 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:29.470124 kubelet[2028]: E1216 09:56:29.469969 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:29.921502 kubelet[2028]: E1216 09:56:29.921415 2028 controller.go:195] "Failed to update lease" err="Put \"https://188.245.229.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 09:56:30.470276 kubelet[2028]: E1216 09:56:30.470190 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:31.471050 kubelet[2028]: E1216 09:56:31.470940 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:32.471594 kubelet[2028]: E1216 09:56:32.471484 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:33.472866 kubelet[2028]: E1216 09:56:33.472766 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:34.473284 kubelet[2028]: E1216 09:56:34.473175 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:35.474373 kubelet[2028]: E1216 09:56:35.474272 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:36.475334 kubelet[2028]: E1216 09:56:36.475213 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:37.475555 kubelet[2028]: E1216 09:56:37.475457 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:38.476459 kubelet[2028]: E1216 09:56:38.476312 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:39.477655 kubelet[2028]: E1216 09:56:39.477540 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:39.847870 kubelet[2028]: E1216 09:56:39.847798 2028 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": Get \"https://188.245.229.227:6443/api/v1/nodes/10.0.0.4?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 09:56:39.922008 kubelet[2028]: E1216 09:56:39.921919 2028 controller.go:195] "Failed to update lease" err="Put \"https://188.245.229.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 16 
09:56:40.478262 kubelet[2028]: E1216 09:56:40.478162 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:41.479221 kubelet[2028]: E1216 09:56:41.479124 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:42.479587 kubelet[2028]: E1216 09:56:42.479486 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:43.480579 kubelet[2028]: E1216 09:56:43.480496 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:44.481007 kubelet[2028]: E1216 09:56:44.480872 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 16 09:56:45.481779 kubelet[2028]: E1216 09:56:45.481672 2028 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
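The file_linux.go 61 entries that dominate the tail of this log only record that the static-pod directory /etc/kubernetes/manifests does not exist and is being ignored; the notable failures are the controller.go 195 and kubelet_node_status.go 544 timeouts, where the kubelet cannot reach https://188.245.229.227:6443 within the 10s client timeout to renew the node Lease or node status for 10.0.0.4. A rough Go sketch of that lease renewal using client-go; the kubeconfig path is an assumption, while the namespace, lease name and timeout match the log:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cfg.Timeout = 10 * time.Second // matches the ?timeout=10s in the failing requests
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "10.0.0.4", metav1.GetOptions{})
	if err != nil {
		fmt.Println("get lease failed:", err) // e.g. Client.Timeout exceeded while awaiting headers
		return
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	if _, err := cs.CoordinationV1().Leases("kube-node-lease").Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
		fmt.Println("update lease failed:", err)
	}
}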